WO2014119555A1 - Image processing device, display device and program - Google Patents

Image processing device, display device and program

Info

Publication number
WO2014119555A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
contour
display
unit
pixel
Prior art date
Application number
PCT/JP2014/051796
Other languages
French (fr)
Japanese (ja)
Inventor
Hidenori Kuribayashi (栗林 英範)
Original Assignee
Nikon Corporation (株式会社ニコン)
Priority date
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2014119555A1


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, to produce spatial visual effects
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/366: Image reproducers using viewer tracking
    • H04N13/376: Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00: Aspects of the constitution of display devices
    • G09G2300/02: Composition of display devices
    • G09G2300/023: Display panel composed of stacked panels
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00: Aspects of display data processing
    • G09G2340/04: Changes in size, position or resolution of an image
    • G09G2340/0464: Positioning
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • The present invention relates to an image processing device, a display device, and a program.
  • This application claims priority based on Japanese Patent Application No. 2013-17968 and Japanese Patent Application No. 2013-17969, both filed on January 31, 2013, the contents of which are incorporated herein by reference.
  • Some image processing apparatuses generate image information to be displayed on a display system capable of three-dimensional display.
  • Image information displayed on such a display system capable of three-dimensional display is typically created exclusively as image information for three-dimensional display.
  • There is also a technique for performing three-dimensional display based on image information for two-dimensional display (see, for example, Patent Document 1).
  • However, the positions of the plurality of images viewed from the user may be shifted depending on the definition of the pixels.
  • In that case, the accuracy of the stereoscopic image perceived by the user cannot be improved with the display methods described above.
  • An object of an aspect of the present invention is to provide an image processing device and a program that improve the visibility of a displayed stereoscopic image. Another object is to provide a display device that can improve the accuracy of a stereoscopic image perceived by a user.
  • According to one aspect, a display device includes: a first display surface that displays a first image based on first image data including a display target; a second display surface that displays a second image based on second image data including the display target; a detection unit that detects the position of an observer observing the first display surface and the second display surface; and a control unit that corrects the image data in the vicinity of the contour portion of the display target in the second image data based on the position of the observer detected by the detection unit, and displays the corrected data on the second display surface.
  • According to another aspect, a program causes a computer of a display device, the display device including a first display surface that displays a first image based on first image data including a display target, a second display surface that displays a second image based on second image data including the display target, and a detection unit that detects the position of an observer observing the first display surface and the second display surface, to execute a control step of correcting the image data in the vicinity of the contour portion of the display target based on the position of the observer detected by the detection unit and displaying the corrected data on the second display surface.
  • According to another aspect, a display device includes: a first display surface that displays a first image based on first image data including an object; a detection unit that detects the relative position between an observer observing the first display surface and the first display surface; and a control unit that corrects the image data in the vicinity of the contour of the object in the first image data based on the relative position detected by the detection unit, and displays the corrected data on the first display surface.
  • According to another aspect, a program causes a computer of a display device, the display device including a first display surface that displays a first image based on first image data including an object and a detection unit that detects the relative position between an observer observing the first display surface and the first display surface, to execute a control step of correcting the image data in the vicinity of the contour of the object in the first image data based on the relative position detected by the detection unit and displaying the corrected data on the first display surface.
  • According to another aspect, a display device includes: a first display unit that displays a display target at a first depth position; a second display unit that has a plurality of two-dimensionally arrayed pixels and displays, at a second depth position different from the first depth position, a contour image indicating the contour portion of the display target; and a contour correction unit that corrects the contour image based on the position of the contour pixel that displays the contour image among the pixels of the second display unit, the position on the first display unit of the contour portion corresponding to the contour pixel, and the contour position on the second display unit determined based on a predetermined viewpoint position.
  • According to another aspect, an image processing device includes a contour correction unit that, for a contour image indicating the contour portion of the display target, corrects the contour image based on the position of the contour pixel that displays the contour image among the plurality of two-dimensionally arrayed pixels of the second display unit, the position on the first display unit of the contour portion corresponding to the contour pixel, and the contour position on the second display unit determined based on a predetermined viewpoint position.
  • According to another aspect, a program causes a computer to execute a contour correction step of correcting a contour image indicating the contour portion of the display target, which is displayed by the second display unit at a second depth position different from the first depth position at which the first display unit displays the display target, based on the position of the contour pixel that displays the contour image among the plurality of two-dimensionally arrayed pixels of the second display unit, the position on the first display unit of the contour portion corresponding to the contour pixel, and the contour position on the second display unit determined based on a predetermined viewpoint position.
  • According to the aspects of the present invention, the accuracy of the stereoscopic image perceived by the user can be improved.
  • FIG. 1 is a diagram showing an overview of a display system in the present embodiment.
  • The display system 100 shown in this figure displays an image that allows stereoscopic viewing on its display units.
  • In the following, an XYZ rectangular coordinate system is set, and the positional relationship of each part is described with reference to this coordinate system.
  • The direction in which the display device 10 displays an image is taken as the positive direction of the Z axis, and the mutually orthogonal directions in the plane perpendicular to the Z-axis direction are taken as the X-axis direction and the Y-axis direction.
  • The X-axis direction is the horizontal direction of the display device 10, and the Y-axis direction is the vertical direction of the display device 10.
  • The observer 1 is at a position where the display surface 11S of the display unit 11 and the display surface 12S of the display unit 12 are within the field of view.
  • The display device 10 shown in FIG. 1 displays a stereoscopic image so that it can be stereoscopically viewed from a predetermined viewing position Ma in the positive direction of the Z axis (the direction facing the display unit 12).
  • The observer 1 can stereoscopically view the stereoscopic image displayed on the display unit 11 and the display unit 12 of the display device 10 from this viewing position.
  • FIG. 2 is a configuration diagram illustrating an example of a configuration of the display system 100 including the display device 10 according to the present embodiment.
  • The display system 100 according to the present embodiment includes an image processing device 2 and a display device 10.
  • The image processing device 2 supplies image information D11 and image information D12 to the display device 10.
  • The image information D12 is information for displaying the image P12 displayed by the display device 10.
  • The image information D11 is information for displaying the image P11 displayed by the display device 10, and is, for example, image information of an edge image PE generated based on the image information D12.
  • The edge image PE is an image showing an edge portion E in the image P12. The edge image PE will be described later with reference to FIGS. 3A and 3B.
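  • As a minimal sketch of how such an edge image PE could be derived from the image information D12 (the embodiment does not prescribe a method; the function name and the threshold below are illustrative assumptions, supposing a grayscale image and a simple horizontal-gradient threshold):

      import numpy as np

      def edge_image(p12: np.ndarray, threshold: float = 0.1) -> np.ndarray:
          """Mark pixels of P12 whose horizontal luminance change is large;
          the horizontal direction is where binocular parallax occurs."""
          grad = np.abs(np.diff(p12.astype(float), axis=1))  # horizontal gradient
          pe = np.zeros_like(p12, dtype=float)
          pe[:, 1:] = (grad > threshold).astype(float)       # edge pixels -> 1.0
          return pe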
  • The display device 10 includes a display unit 11 and a display unit 12.
  • The display device 10 displays the image P11 based on the image information D11 acquired from the image processing device 2, and displays the image P12 based on the image information D12 acquired from the image processing device 2.
  • The display unit 11 and the display unit 12 of this embodiment are arranged in this order along the (+Z) direction; that is, the display unit 11 and the display unit 12 are arranged at different depth positions. Here, a depth position is a position in the Z-axis direction.
  • The display unit 12 includes a display surface 12S that displays an image in the (+Z) direction, and displays the image P12 on the display surface 12S based on the image information D12 acquired from the image processing device 2.
  • A light ray (light) R12 emitted from the image P12 displayed on the display surface 12S is visually recognized by the observer 1 as an optical image.
  • The display unit 11 includes a display surface 11S that displays an image in the (+Z) direction, and displays the image P11 on the display surface 11S based on the image information D11 acquired from the image processing device 2.
  • A light ray (light) R11 emitted from the image P11 displayed on the display surface 11S is visually recognized by the observer 1 as an optical image.
  • The display unit 12 of the present embodiment is a transmissive display unit that can transmit the light ray R11 corresponding to the image P11 displayed by the display unit 11. That is, the display surface 12S displays the image P12 while transmitting the light ray R11 of the image P11 displayed by the display unit 11, so that the display device 10 displays the image P11 and the image P12 in such a way that the observer 1 visually recognizes them as overlapping. In this way, the display unit 11 displays the image P11, which indicates the edge portion in the image P12, at a depth position different from the depth position at which the image P12 is displayed.
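  • Because the display unit 12 transmits the light ray R11, the optical image reaching each eye can be modeled, to a first approximation, as an additive combination of the two layers. A minimal sketch of this compositing model (an illustrative assumption; the patent describes the effect optically, not computationally, and the per-eye parallax shift between the two depth planes is ignored here):

      import numpy as np

      def composite(p11: np.ndarray, p12: np.ndarray) -> np.ndarray:
          """Approximate optical image IM for one eye: light from the rear
          image P11 passes through the transmissive front panel and adds to
          the light of the front image P12 (luminances normalized to 0..1)."""
          return np.clip(p11.astype(float) + p12.astype(float), 0.0, 1.0)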
  • FIGS. 3A and 3B are schematic diagrams illustrating an example of an image P12 and an image P11 in the present embodiment.
  • FIG. 3A shows an example of the image P12 in the present embodiment.
  • FIG. 3B shows an example of the image P11 in the present embodiment.
  • The image P12 of the present embodiment is, for example, an image showing a quadrangular pattern as shown in FIG. 3A.
  • Any of the four sides constituting the quadrangle can be an edge portion, but in the following description, for convenience, the edge portion (left-side edge portion) E1 indicating the left side of the quadrangle and the edge portion (right-side edge portion) E2 indicating the right side will be described as the edge portion E.
  • The image P11 of the present embodiment includes, for example, as shown in FIG. 3B, an edge image (left-side edge image) PE1 showing the left-side edge portion E1 of the quadrangular pattern and an edge image (right-side edge image) PE2 showing the right-side edge portion E2.
  • An edge portion (also expressed simply as an edge or an edge region) is a portion where the brightness (for example, luminance) of adjacent or neighboring pixels in the image changes abruptly.
  • The edge portion E denotes not only the theoretical line segment without width on the left or right side of the quadrangle shown in FIG. 3A, but also, for example, a region around the edge having a finite width corresponding to the resolution of the display unit 11.
  • FIG. 4 is a schematic diagram illustrating an example of an image displayed by the display device 10 according to the present embodiment.
  • The display unit 11 displays the image P11 so that the observer 1 can visually recognize it.
  • The display unit 12 displays the image P12 so that the observer 1 can visually recognize it.
  • The image P12 is displayed at a position separated by a predetermined distance ZD in the Z-axis direction from the position at which the image P11 is displayed.
  • As described above, the display unit 12 of the present embodiment is a transmissive display unit that transmits light.
  • Therefore, the image P11 displayed on the display unit 11 and the image P12 displayed on the display unit 12 are visually recognized by the observer 1 as overlapping.
  • Here, the predetermined distance ZD is the distance between the position at which the image P11 is displayed and the position at which the image P12 is displayed.
  • The display device 10 displays the images P11 and P12 so that the left-side edge portion E1 in the image P12 displayed by the display unit 12 and the left-side edge image PE1 corresponding to that edge portion are visually recognized as overlapping.
  • Similarly, the display device 10 displays the images P11 and P12 so that the right-side edge portion E2 in the image P12 displayed by the display unit 12 and the right-side edge image PE2 corresponding to that edge portion are visually recognized as overlapping.
  • For example, for the left eye L of the observer 1, the display device 10 displays each image so that the left-side edge image PE1 is visually recognized on the (−X) side of the left-side edge portion E1 of the quadrangle indicated by the image P12 (that is, outside the quadrangle). Further, for the left eye L of the observer 1, the display device 10 displays each image so that the right-side edge portion E2 and the right-side edge image PE2 are visually recognized as overlapping on the (−X) side of the right-side edge portion E2 (that is, inside the quadrangle). Similarly, for the right eye R of the observer 1, the display device 10 displays each image so that the right-side edge portion E2 and the right-side edge image PE2 are visually recognized as overlapping on the (+X) side of the right-side edge portion E2 (that is, outside the quadrangle).
  • Likewise, for the right eye R of the observer 1, the display device 10 displays each image so that the left-side edge portion E1 and the left-side edge image PE1 are visually recognized on the (+X) side of the left-side edge portion E1 of the quadrangle indicated by the image P12 (that is, inside the quadrangle).
  • FIG. 5 is a schematic diagram illustrating an example of the optical image IM in the present embodiment.
  • The optical image IM is the image in which the image P11 and the image P12 are visually recognized by the observer 1.
  • The optical image IM includes an optical image IML visually recognized by the left eye L of the observer 1 and an optical image IMR visually recognized by the right eye R.
  • First, the optical image IML visually recognized by the left eye L of the observer 1 will be described.
  • On the retina of the left eye L, the optical image IML is formed by combining the image P11L visually recognized by the left eye L and the image P12L visually recognized by the left eye L. For example, as described with reference to FIG. 4, the optical image IML is formed by combining the image indicating the left-side edge portion E1 and the left-side edge image PE1 on the (−X) side of the left-side edge portion E1 of the quadrangle indicated by the image P12 (that is, outside the quadrangle).
  • Likewise, the optical image IML is formed by combining the image indicating the right-side edge portion E2 and the right-side edge image PE2 on the (−X) side of the right-side edge portion E2 of the quadrangle indicated by the image P12 (that is, inside the quadrangle).
  • FIG. 6 shows the brightness distribution of the optical image IML visually recognized by the left eye L in the case of FIG. 5.
  • FIG. 6 is a graph showing an example of the brightness distribution of the optical image IM in the present embodiment.
  • The X coordinates X1 to X6 are the X coordinates corresponding to the points at which the brightness of the optical image IML changes.
  • Here, the brightness of the image P12L visually recognized by the left eye L is described as being zero in the range X1 to X2.
  • The brightness of the image P12L is the luminance value BR2 in the range X2 to X6.
  • In the following, the brightness of an image is described using luminance values BR as an example.
  • The brightness of the image P11L visually recognized by the left eye L is the luminance value BR1 in the ranges X1 to X2 and X4 to X5, and zero in the range X2 to X4. Therefore, the brightness (for example, luminance) of the optical image IML visually recognized by the left eye L is the luminance value BR1 in the range X1 to X2.
  • The brightness of the optical image IML is the luminance value BR2 in the ranges X2 to X4 and X5 to X6, and, in the range X4 to X5, the luminance value BR3 obtained by combining the luminance value BR1 and the luminance value BR2.
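  • This brightness distribution can be reproduced numerically under the additive model sketched earlier; a minimal sketch with illustrative values for BR1, BR2 and the breakpoints (all numbers are assumptions; X3 belongs to the right-eye profile and is omitted here):

      import numpy as np

      BR1, BR2 = 0.4, 0.6                      # illustrative luminance values
      X1, X2, X4, X5, X6 = 0, 10, 30, 40, 60   # illustrative breakpoints

      x = np.arange(X1, X6)
      p11l = np.where((x < X2) | ((x >= X4) & (x < X5)), BR1, 0.0)  # edge layer
      p12l = np.where(x >= X2, BR2, 0.0)                            # main layer
      iml = p11l + p12l   # BR1 on X1..X2, BR2 on X2..X4 and X5..X6,
                          # and BR3 = BR1 + BR2 on X4..X5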
  • FIG. 7 is a graph showing an example of binocular parallax that occurs in the left eye L and right eye R in the present embodiment.
  • The brightness distribution of the image recognized by the observer 1 through the optical image IML formed on the retina of the left eye L is as shown by the waveform WL in FIG. 7.
  • The observer 1 recognizes, as an edge portion of an object, the position on the X axis at which the change in the brightness of the visually recognized image is greatest (that is, at which the gradient of the waveform WL or the waveform WR is greatest).
  • That is, for the waveform WL on the left eye L side, the observer 1 recognizes the position XEL shown in FIG. 7 (that is, the position at distance LEL from the origin O on the X axis) as the left-side edge portion of the quadrangle.
  • Similarly, the brightness distribution of the image recognized by the observer 1 through the optical image IMR formed on the retina of the right eye R is as shown by the waveform WR in FIG. 7.
  • For the waveform WR on the right eye R side, the observer 1 recognizes the position XER shown in FIG. 7 (that is, the position at distance LER from the origin O on the X axis) as an edge portion.
  • The observer 1 perceives, as binocular parallax, the difference between the position XEL of the edge portion of the quadrangle viewed by the left eye L and the position XER of the edge portion of the quadrangle viewed by the right eye R.
  • The observer 1 then recognizes the quadrangular image as a stereoscopic image (three-dimensional image) based on the binocular parallax at the edge portion.
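  • Under the assumption stated above that the perceived edge lies where the luminance gradient is greatest, the positions XEL and XER and the resulting parallax can be sketched as follows (profiles such as iml from the previous sketch, and a corresponding imr for the right eye, are assumed):

      import numpy as np

      def perceived_edge(profile: np.ndarray, x: np.ndarray) -> float:
          """X position of the greatest luminance change (perceived edge)."""
          grad = np.abs(np.diff(profile))
          return float(x[np.argmax(grad)])

      # x_el = perceived_edge(iml, x)      # edge position seen by the left eye
      # x_er = perceived_edge(imr, x)      # edge position seen by the right eye
      # parallax = x_er - x_el             # binocular parallax at the edge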
  • As described above, the display device 10 includes the display unit 12 that displays the image P12, and the display unit 11 that displays, at a depth position different from the depth position at which the image P12 is displayed, the image P11 including the edge image PE indicating the edge portion in the image P12. Accordingly, the display device 10 can display the image P12 and the edge image PE (that is, the edge portion) of the image P12 in an overlapping manner. In other words, the display device 10 of the present embodiment can display the image P12 such that the portions other than its edge portion are not affected by the image displayed on the display unit 11 (that is, the edge image PE).
  • FIG. 8 is a diagram for explaining the influence when the positions of the edges of two images displayed on the display device 10 are shifted.
  • The display unit 11 and the display unit 12 in the display device 10 each have a display capable of displaying a stereoscopic image (3D image).
  • Each display is provided with a pixel array composed of two-dimensionally arrayed pixels on its display surface.
  • The luminance at each position on the display surface 11S is adjusted in units of pixels of the pixel array of the display unit 11.
  • Similarly, the luminance at each position on the display surface 12S is adjusted in units of pixels of the pixel array of the display unit 12.
  • Therefore, the definition of a displayable image may be limited by the resolution of the display.
  • For example, the edge width (line width) can be adjusted only in units of pixels and is therefore limited by the pixel size. Further, for example, in order to emphasize an edge of the edge image PE displayed on the display unit 11, the edge may be displayed with a width larger than the size (width) of one pixel.
  • In the following description, the edge width is limited by the pixel size,
  • and corresponds to the width of one pixel.
  • However, the present invention can also be applied to the case where an edge formed by a plurality of pixels is displayed.
  • In addition, the description based on the pixel size may be replaced with a description based on the pixel pitch.
  • FIG. a1 shows a front view of the display unit 11 seen from the display surface 11S side.
  • This display surface 11S is shown in a state in which the right-side edge image PE2 indicated by the image P11 displayed on the display unit 11 is displayed.
  • The grid shown in the figure indicates the position of each pixel.
  • Each pixel is arranged at a position corresponding to the grid.
  • The right-side edge image PE2 is displayed on the pixels in the k-th column along the X axis.
  • FIG. a2 shows a front view of the display unit 12 seen from the display surface 12S side, in a state in which the quadrangle Ob indicated by the image P12 displayed on the display unit 12 is displayed.
  • Here, the right end OR of the quadrangle Ob is at a position that coincides with the right end of the right-side edge image PE2.
  • That is, the right end OR of the quadrangle Ob is at a position corresponding to the boundary between the pixel in the k-th column, which displays the right-side edge image PE2 shown in FIG. a1, and the pixel in the (k+1)-th column.
  • FIG. a3 shows the image (composite image) that is visually recognized in a state where the right-side edge image PE2 shown in FIG. a1 and the right-side edge portion E2 of the quadrangle Ob shown in FIG. a2 overlap.
  • FIG. a4 shows the brightness (luminance) of a section cut out in the horizontal direction (X-axis direction) so as to include the quadrangle Ob in the image (composite image) shown in FIG. a3.
  • It shows how the luminance V0 of the portion indicating the quadrangle Ob before contour correction is enhanced up to the luminance Vp by the synthesis.
  • FIG. 8(a) shows the image obtained when the right end of the right-side edge image PE2 is displayed so as to touch the right end OR of the quadrangle Ob on the (−X) side (that is, inside the quadrangle) at the right-side edge portion E2 of the quadrangle Ob indicated by the image P12. Since the right end of the right-side edge image PE2 is in contact with the right end OR of the quadrangle Ob in this way, the luminance of the image at the right-side edge portion E2 of the quadrangle Ob can be synthesized so that it changes sharply and by a large amount.
  • Thereby, the observer 1 can visually recognize an image whose luminance peak value is raised to Vp (FIG. a4).
  • The image synthesized in this way can be visually recognized when the viewing position of the observer 1 is at the position where the contour can be emphasized most.
  • Here, the viewing position at which the synthesized image can be visually recognized is set as the predetermined position at which the observer 1 can visually recognize the stereoscopic image.
  • FIG. 8(b) shows a case where the observer 1 has moved in the (−X) direction along the X axis, and FIG. 8(c) shows a case where the observer 1 has moved in the (+X) direction along the X axis.
  • FIGS. b1 to b4 shown in FIG. 8(b) and FIGS. c1 to c4 shown in FIG. 8(c) correspond to the above-described FIGS. a1 to a4, respectively.
  • FIG. 8(b) and FIG. 8(c) will be described in order, focusing on the differences from FIG. 8(a).
  • First, the image that is visually recognized in this case will be described with reference to FIG. b2. In FIG. b2, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the boundary between the pixel in the k-th column, which displays the right-side edge image PE2 shown in FIG. b1, and the pixel in the (k+1)-th column.
  • That is, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2.
  • Because the right end OR of the quadrangle Ob is displayed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2, an image is synthesized in which the position of the edge image PE2 at the right-side edge portion E2 appears moved toward the inside of the quadrangle Ob.
  • At the right-side edge portion E2 of the quadrangle Ob, an image similar to that of FIG. 8(a) described above is synthesized with respect to the brightness of the contour enhanced by the right-side edge image PE2 and its width in the X-axis direction.
  • However, the luminance of the image at the right-side edge portion E2 of the quadrangle Ob changes in a staircase pattern.
  • Therefore, the amount of enhancement of the contour image provided by the added right-side edge image PE2 is reduced compared with the case of FIG. 8(a) described above.
  • Next, in FIG. c2, the right end OR of the quadrangle Ob is observed at a position shifted in the (−X) direction from the boundary between the pixel in the k-th column, which displays the right-side edge image PE2 shown in FIG. c1, and the pixel in the (k+1)-th column.
  • That is, the right end OR of the quadrangle Ob is observed at a position shifted in the (−X) direction from the right end of the right-side edge image PE2.
  • Because the right end OR of the quadrangle Ob is displayed at a position shifted in the (−X) direction from the right end of the right-side edge image PE2, an image is synthesized in which the position of the edge image PE2 at the right-side edge portion E2 appears moved toward the outside of the quadrangle Ob, protruding beyond the shape of the quadrangle Ob.
  • At the right-side edge portion E2 of the quadrangle Ob, an image having the same brightness of the contour enhanced by the right-side edge image PE2 as in FIG. 8(a) described above is synthesized.
  • However, the width in the X-axis direction over which that luminance is maintained is narrowed, and the luminance of the image at the right-side edge portion E2 of the quadrangle Ob changes in a staircase pattern.
  • Therefore, the amount of enhancement of the contour image provided by the added right-side edge image PE2 is reduced compared with the case of FIG. 8(a) described above.
  • Moreover, since the width in the X-axis direction over which the brightness of the contour image is maintained is narrower than in FIGS. 8(a) and 8(b) described above, even when the same right-side edge image PE2 as in FIGS. 8(a) and 8(b) is displayed, the amount of enhancement of the contour image is smaller than in those cases.
  • In this way, the image shown in FIG. 8(a) can no longer be visually recognized, and the visibility of the stereoscopic image may be reduced.
  • Note that this factor reducing the visibility of the stereoscopic image is distinct from the deterioration of a displayed image due to aliasing caused by the discrete arrangement of the pixels of the display device 10.
  • In the present embodiment, the visibility of the stereoscopic image is improved by adjusting the edge image PE according to the method described below.
  • Hereinafter, the display system 100 in the present embodiment will be described in detail.
  • FIG. 9 is a schematic block diagram showing the configuration of the display system 100 according to an embodiment of the present invention.
  • The display system 100 illustrated in FIG. 9 includes an image processing device 2 and a display device 10.
  • The display device 10 has a display capable of displaying a stereoscopic image (3D image) so that it can be stereoscopically viewed from a predetermined viewing position.
  • The stereoscopic image (3D image) may be either a 3D video (3D moving image) or a 3D still image.
  • A stereoscopic image (3D image) may be a natural image obtained by photographing an actual subject, or an image generated, processed, or synthesized by software (a computer graphics (CG) image or the like).
  • This display is, for example, a liquid crystal display, an organic EL (electro-luminescence) display, or a plasma display, and is a display capable of displaying a stereoscopic image as described above.
  • The display of the display device 10 includes a pixel array composed of pixels arranged two-dimensionally on the display surface.
  • The image processing device 2 includes a contour correction unit 210, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • The contour correction unit 210 corrects and outputs at least one piece of the supplied image information according to the image information supplied to it.
  • The image information to be processed by the contour correction unit 210 includes image information D11P (first image information) and image information D12P (second image information).
  • The image information D11P is, of the image information for stereoscopically displaying a display target at a predetermined position by binocular parallax on the display unit 11 (first display unit) and the display unit 12 (second display unit), the image information to be displayed on the display unit 11.
  • The image information D12P is, of the image information for stereoscopically displaying the display target at the predetermined position by binocular parallax, the image information to be displayed on the display unit 12.
  • The contour correction unit 210 in the present embodiment corrects the image information D11P (first image information) to generate the image information D11.
  • The contour correction unit 210 outputs the image information D12P (second image information) as it is, without correcting it.
  • The contour correction unit 210 corrects the image information corresponding to the contour portion of the display target based on the predetermined position (Ma in FIG. 1) at which the display target displayed on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax, the positions of the plurality of two-dimensionally arrayed pixels of the display unit 11, and the pixel position on the display unit 11 corresponding to the contour portion (for example, the edge portion E) of the display target.
  • Here, the predetermined position (Ma in FIG. 1) at which the display target displayed on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax is a position from which the observer 1 can visually recognize a stereoscopic image.
  • The positions of the plurality of two-dimensionally arrayed pixels of the display unit 11 are the positions of the pixels in the pixel array provided on the display surface 11S.
  • The pixel position on the display unit 11 corresponding to the contour of the display target is the position at which the contour of the display target is displayed on the display unit 11.
  • When correcting the image information D11P, the contour correction unit 210 corrects the image information corresponding to the contour portion of the display target according to the image information D12P (second image information).
  • Here, the image information D12P is, of the image information for stereoscopically displaying the display target at the predetermined position by binocular parallax, the image information to be displayed on the display unit 12.
  • The contour correction unit 210 corrects the image information corresponding to the contour portion of the display target in the image information D11P so as to reduce the degradation of the visibility of the stereoscopic display caused by the relative positional relationship between the pixel position on the display unit 11 corresponding to the contour portion of the display target and the positions of the plurality of pixels arrayed in the display unit 11.
  • For example, in correcting the image information D11P of the contour portion of the display target, the contour correction unit 210 corrects the luminance of the contour portion.
  • As another example, in correcting the image information D11P of the contour portion of the display target, the contour correction unit 210 corrects the width of the contour portion. The details of the process of correcting the image information corresponding to the contour portion of the display target so as to reduce the degradation of the visibility of the stereoscopic display will be described later.
  • Next, the contour correction unit 210 in the present embodiment will be described with a more specific example.
  • The contour correction unit 210 in the present embodiment includes a determination unit 213 and a correction unit 211.
  • The determination unit 213 determines a condition for controlling the correction processing in the correction unit 211 described later. For example, in correcting the image information D11P of the contour portion of the display target, the determination unit 213 determines whether or not the position of the contour portion of the display target on the display unit 11 falls within the range of a first pixel and a second pixel that are adjacent to each other among the pixels of the display unit 11.
  • Based on this determination, the determination unit 213 determines that correction is necessary when the position of the contour portion of the display target on the display unit 11 falls within the range of the adjacent first and second pixels among the pixels of the display unit 11, and determines that correction is not necessary when it does not.
  • Here, the adjacent first pixel and second pixel are pixels arranged side by side in the direction in which stereoscopic parallax occurs (the horizontal direction).
  • When it is determined, based on the determination result of the determination unit 213, that the position of the contour portion of the display target falls within the range of the first pixel and the second pixel, the correction unit 211 corrects the contour portion of the display target to be displayed on the display device 10.
  • For example, the correction unit 211 corrects the image information D11P to be displayed on at least one of the first pixel and the second pixel, based on the predetermined position, the positions of the plurality of two-dimensionally arrayed pixels of the display unit 11, and the pixel position on the display unit 11 corresponding to the contour portion of the display target.
  • For example, the correction unit 211 corrects the image information D11P displayed on the first pixel and the second pixel according to a correction amount of the image information D11P determined for the pair of the first pixel and the second pixel. Thereby, the contour correction unit 210 can correct the image information D11P on the assumption that the position of the observer 1 (user) viewing the display unit 11 and the display unit 12 is the predetermined position from which the display target can be stereoscopically viewed by binocular parallax.
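  • A minimal sketch of this determination and of a paired correction amount (the proportional weighting below is an assumption for illustration; the embodiment only states that a correction amount is determined for the first/second pixel pair):

      def needs_correction(contour_x: float, pixel_pitch: float) -> bool:
          """True when the contour position on the display unit 11 falls
          inside a pixel rather than exactly on a boundary, i.e. it spans
          the adjacent first/second pixel pair."""
          frac = (contour_x / pixel_pitch) % 1.0
          return frac != 0.0

      def pair_weights(contour_x: float, pixel_pitch: float) -> tuple[float, float]:
          """Split the edge luminance over the pixel pair in proportion to
          how far the contour sits into the second pixel (assumed rule)."""
          frac = (contour_x / pixel_pitch) % 1.0
          return 1.0 - frac, frac

      # Example: a contour at 10.3 pixel pitches puts about 70% of the edge
      # luminance on the first pixel and 30% on the second.
      w1, w2 = pair_weights(10.3, 1.0)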
  • The imaging unit 230 images the direction that includes the above-described viewing position on the display surface side of the display device 10; that is, it captures the direction facing the display surface of the display device 10.
  • Although not shown, the imaging unit 230 includes an optical lens, an imaging device that captures the subject light beam (optical image) input via the optical lens, and an imaging signal processing unit that outputs the data captured by the imaging device as digital image data.
  • The imaging unit 230 supplies the captured image (digital image data) to the detection unit 250 and the control unit 260.
  • The detection unit 250 includes an observer detection unit 251 (detection unit), a face detection unit 252, and a face recognition unit 253.
  • The observer detection unit 251 detects, based on the image captured by the imaging unit 230, the relative position with respect to the display device 10 of an observer on the display surface side of the display device 10; that is, it detects the relative position of an observer in the direction facing the display device 10.
  • Specifically, the observer detection unit 251 detects the position of the observer (X and Y coordinates) in a plane parallel to the display surface of the display device 10 (the XY plane), based on the position of the image region of the observer's face detected by the face detection unit 252 within the image captured by the imaging unit 230.
  • The observer detection unit 251 also detects the distance from the display device 10 to the observer with respect to the display surface of the display device 10 (the distance in the Z-axis direction, i.e., the Z coordinate) based on the image region of the observer's face detected by the face detection unit 252. For example, the observer detection unit 251 detects this distance based on the size of the image region of the observer's face, or on the interval between the left-eye image region and the right-eye image region within the face image region. Note that the observer detection unit 251 may also detect an observer from the image captured by the imaging unit 230 based on human characteristics other than the face (for example, body-shape characteristics).
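  • A minimal sketch of such a position estimate under a pinhole-camera model (the focal length, interpupillary distance, and function names are illustrative assumptions, not values from the source):

      FOCAL_LENGTH_PX = 1000.0   # camera focal length in pixels (assumption)
      IPD_MM = 63.0              # assumed average interpupillary distance

      def observer_distance_mm(eye_gap_px: float) -> float:
          """Z-axis distance from the display to the observer, estimated from
          the pixel distance between the detected left- and right-eye regions,
          using the pinhole relation: distance = f * IPD / gap."""
          return FOCAL_LENGTH_PX * IPD_MM / eye_gap_px

      def observer_xy_mm(face_px, center_px, distance_mm):
          """X/Y offset of the observer in a plane parallel to the display
          surface, from the face position in the captured image."""
          dx = (face_px[0] - center_px[0]) * distance_mm / FOCAL_LENGTH_PX
          dy = (face_px[1] - center_px[1]) * distance_mm / FOCAL_LENGTH_PX
          return dx, dy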
  • The face detection unit 252 detects a face image region from the image captured by the imaging unit 230.
  • Specifically, the face detection unit 252 extracts image information indicating the outline of the face and the positions of the eyes from the image captured by the imaging unit 230, and detects the image region of a face in the direction facing the display device 10, that is, the face of the observer, by comparing the extracted image information with information indicating the features of a human face.
  • In the following, "detecting a face image region" is also referred to simply as "detecting an observer's face".
  • The above-described information indicating the features of a human face is stored in the storage unit 270, for example, as a face detection database used for detecting a face from an image. The face detection unit 252 supplies the detection result to the observer detection unit 251 or the face recognition unit 253.
  • When the face detection unit 252 fails to detect a face from the image captured by the imaging unit 230, it supplies to the observer detection unit 251 or the face recognition unit 253 a detection result indicating that there is no face in the direction facing the display device 10, that is, that there is no observer.
  • The face recognition unit 253 recognizes the face of the observer detected by the face detection unit 252 based on the image captured by the imaging unit 230. For example, the face recognition unit 253 recognizes which observer's face has been detected, based on the detection result of the face detection unit 252 (information indicating the outline of the face, the positions of the eyes, and the like) and information indicating the facial features of a plurality of persons.
  • The information indicating the facial features of each of the plurality of persons is stored in the storage unit 270, for example, as a face recognition database used for recognizing a face extracted from an image.
  • The face recognition database may be configured so that information can be added arbitrarily. For example, the display system 100 may provide a menu for registering information indicating the features used to recognize an observer's face; when this menu is executed, the imaging unit 230 images the face of the observer to be registered, information for recognizing that observer's face is generated based on the captured face image, and the information is registered in the face recognition database.
  • The storage unit 270 stores information necessary for the detection performed by the detection unit 250.
  • For example, the storage unit 270 stores, as the face detection database, information indicating the features of a human face necessary for detecting a face from an image.
  • The storage unit 270 also stores, as the face recognition database, information indicating the facial features of each of a plurality of persons necessary for recognizing a face extracted from an image.
  • The control unit 260 includes a difference calculation unit 261, a contour correction control unit 262, and a display control unit 263.
  • The difference calculation unit 261 calculates the difference (positional difference) between the relative position of the observer detected by the observer detection unit 251 and the viewing position from which the stereoscopic image displayed on the display device 10 can be stereoscopically viewed.
  • For example, the difference calculation unit 261 calculates the difference between the detected relative position of the observer and the viewing position as coordinates on the X, Y, and Z axes.
  • The contour correction control unit 262 causes the contour correction unit 210 to generate correction information for correcting the contour of an image to be displayed so that a stereoscopically viewable image can be presented at the viewing position of the observer 1, based on the image captured by the imaging unit 230.
  • For example, the contour correction control unit 262 generates position information indicating the predetermined position at which the display target displayed on the display unit 11 and the display unit 12 is stereoscopically displayed, based on the relative position of the observer 1 detected by the observer detection unit 251.
  • The contour correction control unit 262 supplies the generated position information to the contour correction unit 210, and the contour correction unit 210 generates, based on this position information, the correction information for correcting the contour of the image displayed on the display device 10.
  • In the following, the relative position of the observer may also be described simply as the position of the observer.
  • The display control unit 263 controls the display of the display device 10.
  • For example, the display control unit 263 causes the display device 10 to display a stereoscopic image based on an input image signal.
  • FIG. 10 is a diagram illustrating the positional relationship between the observer, the display device, and the contour portion in a target image.
  • The position of the observer in FIG. 10 is based on the position information of the observer 1 detected by the observer detection unit 251 as described above.
  • FIG. 10(a) shows the display device 10 viewed from the display surface side (the (+Z) side).
  • FIG. 10(b) shows a plan view of the XZ plane viewed from the (+Y) side.
  • Here, the positional relationship between the edge portion E in the image P12 and the edge image PE in the image P11 is described in a simplified form so that the relationship can be shown clearly.
  • From the position Ma of the observer 1, the edge portion E in the image P12 is visually recognized as overlapping the edge image PE in the image P11.
  • In this positional relationship, the image P11 and the image P12 are arranged at positions where stereoscopic viewing is easy.
  • That is, the position of a point (point POS2) representing the edge portion E in the image P12 and the position of the corresponding point (point POS1) on the image P11 coincide on the synthesized image.
  • The position of the observer 1 is separated from the surface of the display device 10 (display unit 12) by a distance LD in the (+Z) direction along the Z axis. The position of the observer 1 is denoted by Ma (0, 0, LD), the position of the point POS2 by (X2, Y2, 0), and the position of the point POS1 by (X1, Y1, −ZD).
  • Since Ma (0, 0, LD), the point POS2, and the point POS1 lie on a straight line, the point POS2 and the point POS1 are visually recognized as overlapping. Owing to this positional relationship, the observer 1 located at Ma (0, 0, LD) can visually recognize the stereoscopic image formed by the image P11 and the image P12.
  • When the observer 1 moves from Ma, however, the point POS1 no longer overlaps the point POS2.
  • For example, when the position of the observer 1 moves from Ma to Mb, the position of the point POS1 must move to the point POS1′ in order to maintain the overlap.
  • When the observer moves further, the position of the point POS1 must move to the point POS1″.
  • Here, the position of the point POS1 can be moved only in units of one pixel on the display unit 11.
  • However, the required movement of the position of the point POS1 is not necessarily a whole multiple of the width of one pixel of the display unit 11 in the X-axis direction. Since the position at which the edge image PE is displayed depends on the pixel positions of the display unit 11, the position at which the edge image PE can actually be displayed does not necessarily coincide with the position of the point POS1 calculated as described above.
  • In that case, the point POS1 does not overlap the point POS2, and, owing to the limit of the resolution of the display unit 11, the display cannot always be performed so that the two points overlap.
  • Therefore, the control unit 260 controls the contour correction unit 210 to correct the contour image of the image P11 according to the movement amount from (X1, Y1, −ZD). Note that, from the positional relationship shown in FIG. 10(b), the amount of movement, on the display surface 11S, of the image displayed on the display unit 11 can be calculated from the movement amount of the observer 1 by a proportional calculation based on the similarity of triangles.
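  • A minimal sketch of that proportional calculation, using the coordinates defined above (the observer at height LD above the display unit 12, the display unit 11 at depth −ZD; the helper names are illustrative):

      def pos1_x(observer_x: float, ld: float, zd: float, x2: float) -> float:
          """X coordinate on the display unit 11 of the point that lines up
          with the contour point POS2 = (x2, ., 0) on the display unit 12,
          as seen from an observer at (observer_x, ., LD); derived from the
          similar triangles of FIG. 10(b)."""
          t = (ld + zd) / ld          # sight-line parameter reaching z = -ZD
          return observer_x + t * (x2 - observer_x)

      # For an observer movement dx (POS2 fixed), the required shift of POS1 is
      #   pos1_x(dx, ld, zd, x2) - pos1_x(0, ld, zd, x2) == -dx * zd / ld,
      # i.e. ZD/LD times the observer's movement, in the opposite direction.
      # Dividing by the pixel pitch of the display unit 11 gives the shift in
      # pixel columns; its fractional part is what the correction must absorb.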
  • FIG. 11 is a diagram for explaining the correction of the contour image in the display system 100.
  • The image processing device 2 corrects the contour image so as to reduce the loss of visibility due to the above-described deviation, by widening the area that has the same brightness as the area indicating the contour in the contour image.
  • The details will be described below with an example.
  • FIG. 11 shows the image of the right-side edge portion E2 of the quadrangle Ob indicated by the image P12, as in FIG. 8(a) described above.
  • FIGS. a1 to a4, arranged in order from the top of FIG. 11(a), are the same as FIGS. a1 to a4 of FIG. 8(a).
  • FIG. 11(b) shows a case where the observer 1 has moved in the (−X) direction along the X axis, and FIG. 11(c) shows a case where the observer 1 has moved in the (+X) direction along the X axis.
  • FIGS. b1 to b4 shown in FIG. 11(b) and FIGS. c1 to c4 shown in FIG. 11(c) correspond to the above-described FIGS. a1 to a4.
  • FIG. 11(b) and FIG. 11(c) will be described in order, focusing on the differences from FIG. 11(a).
  • Here, the width of the edge portion E (right-side edge portion E2) of the quadrangle Ob in the image P12 is assumed to be the width d (FIG. 10) of one pixel of the display unit 11 in the X-axis direction.
  • In FIG. b2, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the boundary between the pixel in the k-th column (column k) and the pixel in the (k+1)-th column (column (k+1)) shown in FIG. b1.
  • When the right end OR of the quadrangle Ob is at such a position, the right-side edge portion E2 of the quadrangle Ob spans pixels in adjacent columns in the X-axis direction.
  • Here, the pixels in the columns adjacent in the X-axis direction are the pixel in the k-th column and the pixel in the (k+1)-th column shown in FIG. b1.
  • In this case, the contour correction unit 210 generates a right-side edge image PE2′ from the right-side edge image PE2 indicated by the image information D11P.
  • The contour correction unit 210 of the present embodiment arranges the right-side edge image PE2′ next to the right-side edge image PE2 so as to correct the pixels in the column adjacent in the X-axis direction according to the right-side edge image PE2.
  • Specifically, the contour correction unit 210 arranges the right-side edge image PE2′ in column (k+1), next to the right-side edge image PE2 located in column k.
  • FIG. 11(b) shows an example in which the luminances of the pixels shown as the right-side edge image PE2 and the right-side edge image PE2′ are the same. The case of adjusting the luminances of the pixels indicating the right-side edge image PE2 and the right-side edge image PE2′ will be described later.
  • That is, when it detects that the right end OR of the quadrangle Ob is displayed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2, the contour correction unit 210 displays the right-side edge image PE2′. As a result, an image corrected by the edge image PE2 and the right-side edge image PE2′ is synthesized at the right-side edge portion E2 of the quadrangle Ob (FIGS. b3 and b4). Through this correction, the edge image PE2 and the right-side edge image PE2′ are arranged side by side, and a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 11(a), is synthesized (FIG. b4).
• in FIG. 11C, the right end OR of the quadrangle Ob is observed at a position moved in the (−X) axis direction from the boundary between the pixel in the k-th column (column k), which displays the right side edge image PE2, and the pixel in the (k+1)-th column (column (k+1)).
• this is a situation in which the right-side edge portion E2 of the quadrangle Ob spans pixels in columns adjacent to each other in the X-axis direction, namely the pixel in the k-th column and the pixel in the (k−1)-th column shown in FIG. c1.
• therefore, the contour correction unit 210 generates a right side edge image PE2′ from the right side edge image PE2 indicated by the image information D11P.
• the contour correction unit 210 according to the present embodiment arranges the right side edge image PE2′ side by side with the right side edge image PE2 so as to correct, in accordance with the right side edge image PE2, the pixels in the column adjacent in the X-axis direction.
• specifically, the contour correcting unit 210 arranges the right side edge image PE2′ in column (k−1) side by side with the right side edge image PE2 located in column k.
• FIG. 11C shows an example in which the luminances of the pixels shown as the right side edge image PE2 and the right side edge image PE2′ are the same. The case of adjusting the luminances of the pixels indicating the right side edge image PE2 and the right side edge image PE2′ will be described later.
• that is, when it is detected that the right end OR of the quadrangle Ob is observed at a position moved in the (−X) axis direction from the right end of the right side edge image PE2, the contour correcting unit 210 displays the right side edge image PE2′. Thereby, an image corrected by the right side edge image PE2 and the right side edge image PE2′ at the right-side edge portion E2 of the quadrangle Ob is synthesized (FIGS. c3 and c4). As a result of this correction, the right side edge image PE2 and the right side edge image PE2′ are arranged side by side, and, as in FIG. 11A, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob is synthesized in FIG. 11C (FIG. c4).
• even when the right end OR of the quadrangle Ob moves in the (−X) axis direction from the right end of the right side edge image PE2, the contour image continues to the position of the right end OR of the quadrangle Ob owing to the presence of the right side edge image PE2 and the right side edge image PE2′.
• this is because the right side edge image PE2′ is arranged on the (−X) axis side of the right side edge image PE2 so as to be continuous with the right side edge image PE2.
• when compared with the synthesized images shown in FIGS. 8B and 8C, the luminance peak value Vp of the contour image is the same in either case.
• however, the width over which the contour image, continuing to the position of the right end OR of the quadrangle Ob, shows the peak luminance value Vp can be secured wider than the pixel width; in this respect the result differs from FIGS. 8B and 8C.
• by thus widening the region showing the luminance peak value of the synthesized image in the vicinity of the right end OR of the quadrangle Ob, the position of the end recognized as the edge of the contour image can be matched with the right end OR of the quadrangle Ob.
  • the display system 100 can show an outline in which the width of the outline indicated by the peak value Vp is wider than the width of the pixel, with the end aligned with the right end OR.
• in other words, the display system 100 enhances the contour image by adding the right side edge image PE2′ to the right side edge image PE2, thereby synthesizing a contour image in which the luminance of the edge portion is enhanced.
  • the display system 100 can reduce the influence of the shift of the edge position caused by the observer 1 moving from a predetermined position.
  • the display system 100 according to the present embodiment can improve the visibility of a stereoscopic image even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
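• As a rough illustration of the correction just described, the following Python sketch places a copy of the edge column PE2 into the adjacent pixel column, under the assumption that the contour image is held as a 2-D luminance array; the function name widen_edge_column and the array layout are illustrative, not from the source. A gain of 1.0 mimics the equal-luminance correction of FIG. 11, while a gain below 1.0 mimics the reduced-luminance variant described next for FIG. 12.

```python
import numpy as np

def widen_edge_column(contour_img, edge_col, shift_sign, gain=1.0):
    """Copy the edge column (PE2) into the adjacent pixel column (PE2').

    contour_img : 2-D array of luminance values (rows x columns).
    edge_col    : index k of the column holding the edge image PE2.
    shift_sign  : +1 when the right end OR appears moved in the (+X)
                  direction (PE2' goes into column k+1), -1 for the
                  (-X) direction (PE2' goes into column k-1).
    gain        : 1.0 mimics the equal-luminance correction (FIG. 11);
                  a value below 1.0 mimics the reduced-luminance
                  variant (FIG. 12).
    """
    corrected = contour_img.copy()
    neighbor = edge_col + shift_sign
    if 0 <= neighbor < contour_img.shape[1]:
        # PE2' is a (possibly attenuated) copy of PE2 placed side by side.
        corrected[:, neighbor] = np.maximum(
            corrected[:, neighbor], gain * contour_img[:, edge_col]
        )
    return corrected

# Example: a one-pixel-wide edge at column k = 4, with the right end OR
# observed shifted in the (+X) direction.
pe = np.zeros((3, 8))
pe[:, 4] = 255.0  # edge image PE2 at the peak value Vp
print(widen_edge_column(pe, edge_col=4, shift_sign=+1))
```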
  • FIG. 12 is a diagram for explaining a modification of the contour image correction method in the display system 100.
• in this modification, the above-described deviation is tolerated by arranging an area whose brightness (luminance) is reduced relative to the brightness (luminance) of the contour area of the contour image so as to be continuous with the contour area before correction. Details are described below.
  • FIG. 12 shows an image of the right edge portion E2 of the quadrangle Ob indicated by the image P12 as in the above-described FIG. 8A.
• FIGS. a1 to a4 arranged in order from the upper side in FIG. 12A are the same as FIGS. a1 to a4 in FIG. 8A.
• FIG. 12B shows a case where the observer 1 moves in the (−X) axis direction along the X axis, and FIG. 12C shows a case where the observer 1 moves in the (+X) axis direction along the X axis.
• FIGS. b1 to b4 shown in FIG. 12B and FIGS. c1 to c4 shown in FIG. 12C correspond to the above-described FIGS. a1 to a4, respectively.
• FIG. 12B and FIG. 12C will be described in order, centering on the differences from FIG. 12A.
  • the width of the edge portion E (right edge portion E2) of the quadrangle Ob in the image P12 is set as the width of the pixel in the display unit 11 in the X-axis direction.
• the correction of the contour image in FIG. 12B is performed at the same position as the correction of the contour image in FIG. 11B.
  • the contour correcting unit 210 arranges the right-side edge image PE2 'in the (k + 1) column side by side with respect to the right-side edge image PE2 located in the k column.
• FIG. 12B shows an example in which the luminances of the pixels shown as the right side edge image PE2 and the right side edge image PE2′ differ from those in FIG. 11B described above.
• a specific difference is that, as shown in FIG. b1 of FIG. 12B, when the right side edge image PE2′ is generated from the right side edge image PE2 indicated by the image information D11P for the pixels in the (k+1)-th column, the brightness (luminance) of the right side edge image PE2′ is made different.
• that is, the luminance of the right side edge image PE2′ is set lower than the luminance of the right side edge image PE2 on which it is based.
• the correction of the contour image in this modification differs in this respect from the correction of the contour image shown as the example of the first embodiment.
• as in the first embodiment, the right side edge image PE2′ is displayed upon detecting that the right end OR of the quadrangle Ob is observed at a position moved in the (+X) axis direction from the right end of the right side edge image PE2.
• thereby, an image corrected by the right side edge image PE2 and the right side edge image PE2′ at the right-side edge portion E2 of the quadrangle Ob is synthesized (FIGS. b3 and b4).
• since the right side edge image PE2 and the right side edge image PE2′ are arranged side by side, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 12A, is synthesized in FIG. 12B (FIG. b4).
• owing to the right side edge image PE2 and the right side edge image PE2′, the contour image continues to the position of the right end OR of the quadrangle Ob.
• however, the luminance peak values of the contour image differ between the right side edge image PE2 and the right side edge image PE2′: the peak luminance value of the contour image in the section of the right side edge image PE2′ is lower than the peak luminance value (Vp) of the contour image in the section of the right side edge image PE2.
• the right side edge image PE2′ is arranged on the (+X) axis side of the right side edge image PE2 so as to be continuous with the right side edge image PE2.
• when compared with the synthesized image shown in FIG. 8B, the width over which the contour image shows the luminance peak value Vp remains equal to the pixel width, the same as in FIG. 8B; however, it differs from FIG. 8B in that the contour image continues to the position of the right end OR of the quadrangle Ob.
• since the width showing the peak value Vp is kept at the pixel width, a delicate contour having the width of one pixel can be shown at that position.
  • the contour correcting unit 210 arranges the right side edge image PE2 'in the (k-1) column side by side with respect to the right side edge image PE2 located in the k column.
  • FIG. 12C shows an example in which the luminances of the pixels shown as the right side edge image PE2 and the right side edge image PE2 ′ are different from those of FIG. 11C described above.
• a specific difference is that, as shown in FIG. c1 of FIG. 12C, when the right side edge image PE2′ is generated from the right side edge image PE2 indicated by the image information D11P for the pixels in the (k−1)-th column, the brightness (luminance) of the right side edge image PE2′ is made different.
• that is, the luminance of the right side edge image PE2′ is set lower than the luminance of the right side edge image PE2 on which it is based. Note that this differs from the correction of the contour image shown as the example of the first embodiment.
• as before, the right side edge image PE2′ is displayed upon detecting that the right end OR of the quadrangle Ob is observed at a position moved in the (−X) axis direction from the right end of the right side edge image PE2.
• thereby, an image corrected by the right side edge image PE2 and the right side edge image PE2′ at the right-side edge portion E2 of the quadrangle Ob is synthesized (FIGS. c3 and c4).
• since the right side edge image PE2 and the right side edge image PE2′ are arranged side by side, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 12A, is synthesized in FIG. 12C (FIG. c4).
• even when the right end OR of the quadrangle Ob moves in the (−X) axis direction from the right end of the right side edge image PE2, the contour image continues to the position of the right end OR of the quadrangle Ob owing to the presence of the right side edge image PE2 and the right side edge image PE2′.
• however, the luminance peak values of the contour image differ between the right side edge image PE2 and the right side edge image PE2′: the peak luminance value of the contour image in the section of the right side edge image PE2′ is lower than the peak luminance value of the contour image in the section of the right side edge image PE2.
• the right side edge image PE2′ is arranged on the (−X) axis side of the right side edge image PE2 so as to be continuous with the right side edge image PE2.
• when compared with the synthesized image shown in FIG. 8C, the width showing the luminance peak value Vp of the contour image is the same as in FIG. 8C; however, the point that the contour image is corrected toward the inside of the quadrangle Ob differs from FIG. 8C. In short, the width recognized as the contour image can be widened by raising the luminance of the synthesized image in the region inside the quadrangle Ob, adjacent to the contour region showing the luminance peak value Vp, above its original level.
• in this way, the contour image can be enhanced by the added right side edge image PE2′, as compared with FIG. 8 described above.
  • the visibility of the stereoscopic image can be improved.
• FIG. 13 is a diagram illustrating the adjustment of the brightness (luminance) of the edge image PE′ as a modification of the contour image correction method in the display system 100 of the present embodiment.
• in each panel of FIG. 13, the upper part shows the relative positional relationship between the observer 1 and the images P11 and P12 displayed on the display device 10 (display unit 11, display unit 12 (FIG. 1)).
  • the brightness (luminance) of the contour image displayed on the display unit 11 is shown in the middle.
  • the result of combining the images displayed on the display device 10 is shown in the lower part.
• the manner in which the display of the contour changes according to the amount by which the observer 1 has moved from the predetermined position (Ma (FIG. 10)) is shown in order from FIG. 13A to FIG. 13E.
  • FIG. 13 (a) shows a case where the position of the observer 1 coincides with a predetermined position (Ma (FIG. 10)), and corresponds to the above-described FIG. 11 (a) and FIG. 12 (a).
  • FIG. 13B shows a first stage in which the position of the observer 1 has moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axial direction.
• FIG. 13B corresponds to FIGS. 12B and 12C described above. That is, the contour correction unit 210 generates an edge image PE1′ (PE2′) whose luminance is lower than the luminance of the edge image PE1 (PE2).
• FIG. 13C shows a second stage in which the position of the observer 1 has moved further in the (+X) axis direction from the predetermined position (Ma (FIG. 10)); it corresponds to FIGS. 11B and 11C described above. That is, the contour correction unit 210 generates an edge image PE1′ (PE2′) having the same luminance as the edge image PE1 (PE2).
  • FIG. 13D shows a third stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
• up to the second stage, the luminance of the generated edge image PE1′ (PE2′) is adjusted while the luminance of the edge image PE1 (PE2) is maintained at the initial value (Vp).
• from the third stage onward, conversely, the luminance of the edge image PE1 (PE2) is adjusted while the luminance of the edge image PE1′ (PE2′) is held at the value (Vp).
• that is, in the third stage the contour correction unit 210 reduces the luminance of the edge image PE1 (PE2) while maintaining the luminance of the edge image PE1′ (PE2′) at the same value (Vp) as in the second stage.
  • FIG. 13 (e) shows a fourth stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
• in the fourth stage, the movement amount of the observer 1 has increased further, and the edge image PE1′ (PE2′) generated as the correction information is displayed on the pixel adjacent to the pixel that displayed the edge image PE1 (PE2) in FIG. 13A.
• at this stage, only the edge image PE1′ (PE2′) is displayed, and the edge image PE1 (PE2) is no longer displayed.
• in this way, the region (range of movement) in which the luminance of the edge image PE1 (PE2) is adjusted and the region in which the luminance of the edge image PE1′ (PE2′) is adjusted are determined according to the movement amount of the observer 1.
  • the correction amount of a contour image can be continuously adjusted according to the movement of the observer 1.
  • the visibility of the stereoscopic image can be improved.
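• A minimal sketch of this staged adjustment, assuming the movement range equals the pixel width d and that the luminance ramps linearly within each stage (the source specifies only the regions and the held value Vp, so the linear ramps are an assumption):

```python
def staged_edge_luminance(m, d, vp=255.0):
    """Return (luma_PE, luma_PE_prime) for a movement amount m in [0, d].

    First/second stage (m <= d/2): PE stays at Vp while PE' is raised.
    Third stage (d/2 < m < d): PE' stays at Vp while PE is reduced.
    Fourth stage (m >= d): only PE' remains displayed.
    """
    half = d / 2.0
    if m <= half:
        return vp, vp * (m / half)                  # raise PE' toward Vp
    if m < d:
        return vp * (1.0 - (m - half) / half), vp   # reduce PE
    return 0.0, vp                                  # only PE' is displayed

print(staged_edge_luminance(0.5, 1.0))  # second stage: both at Vp
```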
• FIG. 14 is a diagram illustrating the adjustment of the brightness (luminance) of the edge image PE′ as a modification of the contour image correction method in the display system 100 of the present embodiment.
• in each panel of FIG. 14, the upper part shows the relative positional relationship between the observer 1 and the images P11 and P12 displayed on the display device 10 (display unit 11, display unit 12 (FIG. 1)).
  • the brightness (luminance) of the contour image displayed on the display unit 11 is shown in the middle.
  • the result of combining the images displayed on the display device 10 is shown in the lower part.
• in FIG. 13, an example was illustrated in which the region for adjusting the luminance of the edge image PE1 (PE2) and the region for adjusting the luminance of the edge image PE1′ (PE2′) are set as separate segments according to the movement amount of the observer 1.
• in the modification described below, the luminance is adjusted according to the movement amount of the observer 1 without dividing the adjustment into such regions.
  • coefficients k1 and k2 whose values change according to the movement amount of the observer 1 are determined.
• here, the coefficient k1 changes from 1 toward 0 as the movement amount of the observer 1 increases, and the coefficient k2 changes from 0 toward 1 as the movement amount of the observer 1 increases.
• the luminance of the edge image PE1 (PE2) is set by multiplying its original luminance by the coefficient k1, and the luminance of the edge image PE1′ (PE2′) is set by multiplying the original luminance by the coefficient k2.
• thereby, as the movement amount of the observer 1 increases, the luminance of the edge image PE1 (PE2) gradually decreases and the luminance of the edge image PE1′ (PE2′) gradually rises.
• the coefficients k1 and k2 may be chosen so that the sum of their values equals 1.
  • the range in which the observer 1 moves can be determined according to the pixel width in the display unit 11.
• in that case, at the midpoint of the movement range, the luminance of the edge image PE1 (PE2) and the luminance of the edge image PE1′ (PE2′) can be set to the same value.
• the luminance value so set can be made half the luminance value of the original edge image PE1 (PE2).
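• The coefficient-based variant can be sketched as follows; the linear mapping of the movement amount onto k1 and k2 with k1 + k2 = 1 follows the text, while the clamping of the normalized movement amount is an added safeguard:

```python
def crossfade_edge_luminance(m, d, vs=255.0):
    """Blend the luminances of PE1 (PE2) and PE1' (PE2') continuously.

    m  : movement amount of the observer 1 from the reference position.
    d  : movement range, tied to the pixel width as described above.
    vs : initial luminance value Vs of the original edge image.

    k1 falls from 1 to 0 and k2 rises from 0 to 1 as m grows; with
    k1 + k2 = 1 both luminances equal vs / 2 at the midpoint.
    """
    t = min(max(m / d, 0.0), 1.0)  # normalized movement amount
    k1, k2 = 1.0 - t, t
    return k1 * vs, k2 * vs        # (luma_PE, luma_PE')

# At the midpoint both edge images carry half the original luminance.
assert crossfade_edge_luminance(0.5, 1.0) == (127.5, 127.5)
```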
• the panels of FIG. 14 are arranged in the order of FIG. 14A to FIG. 14E according to the amount by which the observer 1 has moved from the predetermined position (Ma (FIG. 10)).
• FIG. 14A shows a case where the position of the observer 1 coincides with the predetermined position (Ma (FIG. 10)), and corresponds to FIGS. 11A, 12A, and 13A described above.
• FIG. 14B shows a first stage in which the position of the observer 1 has moved from the predetermined position (Ma (FIG. 10)) in the (+X) axis direction.
• in the first stage, the coefficients k1 and k2 satisfy the relationship 0 < k2 < k1 < 1. That is, the luminance value of the edge image PE1 (PE2) is reduced from the initial luminance value (Vs) of the edge image PE1 (PE2) to V1.
• the luminance value of the edge image PE1′ (PE2′) generated from the edge image PE1 (PE2) is greatly reduced from the initial luminance value (Vs) of the edge image PE1 (PE2) to V3.
  • FIG. 14C shows a second stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • FIG. 14D shows a third stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
• in the third stage, the coefficients k1 and k2 satisfy the relationship 0 < k1 < k2 < 1. That is, the luminance value of the edge image PE1 (PE2) is greatly reduced from the original luminance value (Vs) of the edge image PE1 (PE2) to V3.
• the luminance of the edge image PE1′ (PE2′) generated from the edge image PE1 (PE2) is reduced from the luminance value (Vs) of the original edge image PE1 (PE2) to V1.
  • FIG. 14E shows a fourth stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
• in the fourth stage, the movement amount of the observer 1 has increased further, and the edge image PE1′ (PE2′) generated as the correction information is displayed on the pixel adjacent to the pixel that displayed the edge image PE1 (PE2) in FIG. 14A.
• at this stage, only the edge image PE1′ (PE2′) is displayed, and the edge image PE1 (PE2) is no longer displayed.
• as described above, according to this modification, the luminance in the region for adjusting the edge image PE1 (PE2) and in the region for adjusting the edge image PE1′ (PE2′) is adjusted according to the movement amount of the observer 1. Thereby, although the method is simplified, the correction amount of the contour image can be adjusted continuously in accordance with the movement of the observer 1. Furthermore, according to the contour image correction method in this modification, the luminance of the edge image PE1 (PE2) and of the edge image PE1′ (PE2′) is reduced according to the movement amount of the observer 1.
  • the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • FIG. 15A to 15C are diagrams showing an overview of the display system in the present embodiment.
  • the display system 100A shown in this figure displays an image that allows stereoscopic viewing on the display unit.
  • FIG. 15A is a schematic diagram in which a part of a cross section of the display device 10A in the display system 100A is enlarged.
• FIG. 15B shows the positional relationship between the display device 10A and the observer 1.
  • FIG. 15C shows the arrangement of pixels on the display surface 11S of the display unit 11A. Even when the observer 1 moves within a predetermined range from the illustrated position, the display device 10A displays the image to be displayed so that it can be viewed stereoscopically.
  • the display unit 11A and the display unit 12 of the display device 10A correspond to the display unit 11 and the display unit 12 of the display device 10 (FIGS. 1 and 2) in the first embodiment, and as in the case of the display device 10, Arranged at different depth positions.
• similarly to the display device 10 described above, the display device 10A displays the display target by showing the image displayed on the display unit 11A and the image displayed on the display unit 12 superimposed, one transmitted through the other.
  • the display device 10A in which the display unit 11A and the display unit 12 are combined displays a stereoscopic image by optically combining the images displayed on the respective display units.
• the optically synthesized stereoscopic image becomes an image that produces binocular parallax between the left eye L and the right eye R, as shown in FIG. 15B.
  • the display system 100A displays the edge of the target image displayed on the display unit 12 on the display unit 11A, thereby displaying the image displayed on the display device 10A in a three-dimensional manner.
• furthermore, the display unit 11A of the present embodiment can, even as a single display, display a stereoscopic image (3D image) so that it can be stereoscopically viewed from a predetermined viewing position.
• as a display method for displaying a stereoscopic image that can be stereoscopically viewed with the naked eye, without using special glasses, there is a method in which different images that produce binocular parallax are displayed so as to be seen by the left eye and the right eye, respectively.
• examples include a parallax barrier method, in which a parallax barrier is arranged in front of the display surface, and a lenticular lens method.
• compared with other methods such as the parallax barrier method, the lenticular lens method can easily increase the amount of light reaching the eyes (left eye (L) and right eye (R)) for the same amount of emitted light.
• the lenticular lens method is therefore suitable for the display unit 11A of the present embodiment, which further displays its image through the display unit 12; an example in which the lenticular lens method is applied to the display unit 11A is shown in FIG. 15A.
  • a sheet-like lenticular lens 13 is provided on the display surface 11S of the display unit 11A.
• the lenticular lens 13 has a plurality of convex lenses (for example, cylindrical lenses), each having a curvature in one direction and no curvature in the direction orthogonal to that direction, arranged side by side in the direction orthogonal to their extending direction.
  • the extending direction of the convex lens is the vertical direction (Y-axis direction).
  • a plurality of rectangular display areas are provided along the extending direction (Y-axis direction) of the convex lens so as to correspond to the convex lenses in the lenticular lens 13.
• to these display areas, a plurality of display areas R1, R2, R3, R4, and R5 for displaying images for the right eye and a plurality of display areas L1, L2, L3, L4, and L5 for displaying images for the left eye are assigned so as to be arranged alternately.
• when the observer 1 is within a range in which a pair consisting of a right-eye image (for example, the image displayed in the display area R1) and a left-eye image (for example, the image displayed in the display area L1) can be observed corresponding to a convex lens, the display on the display unit 11A can be observed as a stereoscopic image.
• as shown in FIG. 15C, the display areas on the display surface 11S of the display unit 11A are arranged in the order of display areas L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, and so on.
  • a plurality of pixels are arranged side by side in the extending direction (Y-axis direction) of each display area.
  • the pixel PICL1, the pixel PICR1, and the pixel PICL2 are provided in order along the X-axis direction.
  • the pair of the pixel PICL1 and the pixel PICR1 is a pair that allows the observer 1 to visually recognize a stereoscopic image.
  • each pixel in the display unit 11A may be any pixel as long as it can be handled as a single pixel, and each pixel may further include a plurality of sub-pixels.
  • the sub-pixels included in each pixel may be provided according to three colors (RGB).
• positions most suitable for observing the displayed stereoscopic image are arranged discretely, like the position of the observer 1 illustrated in FIG. 15B.
  • a predetermined range based on a position most suitable for observing the stereoscopic image to be displayed is an area where the stereoscopic image is easily observed.
  • FIG. 16 is a schematic block diagram showing the configuration of a display system 100A according to an embodiment of the present invention.
  • a display system 100A illustrated in FIG. 16 includes an image processing device 2A and a display device 10A.
  • the display device 10A in the present embodiment includes a display unit 11A and a display unit 12.
  • the image processing device 2A generates a contour image for displaying a stereoscopic image on the display device 10A.
  • the image processing apparatus 2A includes a contour correction unit 210A, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • the contour correcting unit 210A in the present embodiment corrects and outputs at least one of the supplied image information.
  • the contour correction unit 210A supplies, for example, image information D11 obtained by correcting the image information D11P to the display device 10A.
  • the contour correction unit 210A in the present embodiment includes a determination unit 213 and a correction unit 211A.
  • the correction unit 211A corresponds to the above-described correction unit 211 (FIG. 9).
  • the correction unit 211A of the present embodiment generates image information D11 of an image to be displayed for each of the left eye (L) and the right eye (R) based on the image information D11P.
  • image information D11 of an image displayed on the display unit 11A for the left eye (L) and the right eye (R), respectively, is referred to as image information D11L and image information D11R.
• the contour correction unit 210A applies the contour image correction method described in the example and the modifications of the first embodiment described above, and generates a contour image corresponding to the image information D12P.
• in that case, the pixel of the display unit 11 is to be read as a pixel of the display unit 11A in the present embodiment, or as one of the rectangular display areas discretely arranged in the display unit 11A.
• in the first embodiment, an example was given in which the contour image is corrected based on the detected position of the observer 1.
• the contour correcting unit 210A of the present embodiment corrects the contour image based not on the position of the observer 1 itself but on the positions of the left eye (L) and the right eye (R) of the observer 1 estimated from the detected position of the observer 1.
• alternatively, the contour correcting unit 210A may correct the contour image based on directly detected positions of the left eye (L) and the right eye (R) of the observer 1 instead of the position of the observer 1.
• the contour correction unit 210A generates image information D11L from the image information D11P with reference to the position of the left eye (L) of the observer 1, and displays it on the display unit 11A as the image for the left eye (L). Thereby, the image observed by the left eye (L) is an image obtained by optically combining the image based on the image information D11L and the image based on the image information D12.
• similarly, the contour correction unit 210A generates image information D11R from the image information D11P with reference to the position of the right eye (R) of the observer 1, and displays it on the display unit 11A as the image for the right eye (R).
  • the image observed in the right eye (R) is an image obtained by optically combining the image based on the image information D11R and the image based on the image information D12.
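• A hedged sketch of this per-eye generation; the helper generate_contour stands in for the first-embodiment correction applied at a given viewing position, and the interpupillary offset value is purely an assumption, since the source says only that the eye positions are estimated from the detected observer position:

```python
def per_eye_contour_images(generate_contour, head_x, eye_offset=0.032):
    """Generate separate contour images D11L / D11R for the two eyes.

    generate_contour : callable mapping a viewing position along the
                       X axis to a corrected contour image (stands in
                       for the first-embodiment correction applied at
                       that position).
    head_x           : detected position of the observer 1.
    eye_offset       : assumed half interpupillary distance in metres.
    """
    d11l = generate_contour(head_x - eye_offset)  # for the left eye (L)
    d11r = generate_contour(head_x + eye_offset)  # for the right eye (R)
    return d11l, d11r

# Usage with a stand-in correction function:
d11l, d11r = per_eye_contour_images(lambda x: f"contour@{x:+.3f}", head_x=0.0)
print(d11l, d11r)
```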
  • the images to be displayed for the left eye (L) and the right eye (R) are generated independently.
• each of these images is formed by optically synthesizing the displayed images.
• for example, an optical image IMR is formed by combining the image P11R visually recognized by the right eye R and the image P12R visually recognized by the right eye R.
• in this way, the images to be displayed for the left eye (L) and the right eye (R) are generated independently, and an image that the left eye (L) and the right eye (R) can easily view stereoscopically is displayed for each.
  • the observer 1 can observe a stereoscopic image with a more stereoscopic effect.
• further, the contour image corrected as shown in the first embodiment is displayed on the display unit 11A.
• the image information D11 (first image information) is image information for stereoscopically displaying, at a predetermined position by binocular parallax, the display target to be displayed on the display unit 11A (first display unit) and the display unit 12 (second display unit).
• the contour correction unit 210A corrects, in the image information D11, the image information corresponding to the contour portion of the display target, based on the predetermined position, the positions of the plurality of pixels arranged two-dimensionally in the display unit 11A, and the pixel position in the display unit 11A corresponding to the contour portion of the display target.
• the predetermined position is set to, for example, each of the positions of both eyes of the observer 1. Alternatively, instead of setting the predetermined position to each eye position, it may be a single position representing the observer 1, as in the first embodiment described above.
  • a multi-lens type lenticular method can be applied to the display unit 11A.
  • a stereoscopic image can be observed from a plurality of directions in which the display device 10A is viewed.
• for example, the display unit 11A displays the stereoscopic image so that it can be observed both from a position in front of the display unit 11A and from positions deviated from the front. In this case, it is preferable to display on the display unit 11A a contour image that allows a stereoscopic image suitable for observation from each of those directions to be observed continuously even when the observer 1 moves from the position in front of the display unit 11A.
  • the contour correcting unit 210A causes the display unit 11A to display a contour image that displays a stereoscopic image suitable for the direction in which the stereoscopic image can be observed.
• based on the supplied image information (image information D11P and image information D12P), the contour correction unit 210A generates and outputs, for each direction from which the stereoscopic image can be observed, image information D11 representing a contour image that displays a stereoscopic image suitable for that direction.
• the contour correction unit 210A applies, for example, the contour image correction method described in the first embodiment, and outputs a contour image for displaying a stereoscopic image suitable for each direction, determined by the lenticular lens 13, from which the stereoscopic image can be observed.
• the contour correction unit 210A may also output, by the contour image correction method described in the first embodiment, a contour image that displays a stereoscopic image suitable for each direction from which the stereoscopic image is observable.
• for example, taking as a reference the pixel that indicates the contour when viewed from the front of the display unit 11A, the contour correction unit 210A adjusts the luminance of the image displayed on the pixel adjacent to that reference pixel according to the luminance of the reference pixel.
• the contour correcting unit 210A generates a contour image that changes continuously in accordance with the movement of the observer 1, whereby the stereoscopic image displayed on the display unit 11A can be changed gradually and continuously according to the movement of the observer 1, and the stereoscopic image can be changed and displayed so as to follow that movement.
• as described above, the display system 100A can display a contour image that allows the display target to be stereoscopically viewed from the position of the observer 1, according to the stereoscopically viewable directions of the display unit 11A. Thereby, the range within which the observer 1 can observe a stereoscopic image can be expanded.
  • the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • FIG. 17A to 17C are diagrams showing an outline of the display system in the present embodiment.
  • the display system 100B shown in this figure displays an image that allows stereoscopic viewing on the display unit.
  • FIG. 17A shows a state where the viewer 1 is located in a stereoscopically viewable range by enlarging a part of the cross section of the display device 10B in the display system 100B.
• FIG. 17B shows the positional relationship between the display device 10B and the observer 1.
• FIG. 17C shows the arrangement of pixels on the display surface 10S of the display device 10B. Even when the observer 1 moves within a predetermined range from the illustrated position, the observer can stereoscopically view the image displayed on the display device 10B.
  • the display device 10B displays a stereoscopic image (3D image) so that a stereoscopic image can be stereoscopically viewed from a predetermined viewing position even when used alone without being combined with another display device.
  • a sheet-like lenticular lens 13 is provided on the display surface 10S of the display device 10B shown in FIG. 17A.
• the lenticular lens 13 has a plurality of convex lenses (for example, cylindrical lenses), each having a curvature in one direction and no curvature in the direction orthogonal to that direction, arranged side by side in the direction orthogonal to their extending direction.
  • the extending direction of the convex lens is the vertical direction (Y-axis direction).
  • the display surface 10S of the display device 10B is provided with a plurality of rectangular display areas corresponding to the convex lenses in the lenticular lens 13 along the extending direction (Y-axis direction) of the convex lenses.
• to these display areas, a plurality of display areas R1, R2, R3, R4, and R5 for displaying images for the right eye and a plurality of display areas L1, L2, L3, L4, and L5 for displaying images for the left eye are allocated.
  • the display device 10B displays a stereoscopic image using the parallax generated by the lenticular lens 13.
  • the display device 10B of the present embodiment includes, for example, a lenticular lens display (display unit 11B and display unit 12B).
• that is, the display unit 11B is provided with display areas, dispersed over a plurality of locations, for displaying the image to be presented to one of the two eyes.
• similarly, the display unit 12B is provided with display areas, dispersed over a plurality of locations, for displaying the image to be presented to the other eye.
• in the case of FIG. 17C, the display unit 11B is provided in the display areas R1, R2, R3, R4, and R5 (first display areas), and the display unit 12B is provided in the display areas L1, L2, L3, L4, and L5 (second display areas).
• unlike the display device 10 (FIGS. 1 and 2) in the first embodiment, the display unit 11B and the display unit 12B of the display device 10B are arranged along the same surface (display surface 10S).
• the display areas on the display surface 10S of the display device 10B are arranged in the order of display areas L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, and so on.
  • a plurality of pixels are arranged side by side in the extending direction (Y-axis direction) of each display area.
  • the pixel PICL1, the pixel PICR1, and the pixel PICL2 are provided in order along the X-axis direction.
  • the pair of the pixel PICL1 and the pixel PICR1 is a pair that allows the observer 1 to visually recognize a stereoscopic image.
• each pixel in the display unit 11B may be any pixel as long as it can be handled as a single pixel, and each pixel may further include a plurality of sub-pixels.
  • the sub pixels included in each pixel may be provided according to three colors (RGB).
  • FIG. 18 is a schematic block diagram showing the configuration of a display system 100B according to an embodiment of the present invention.
  • a display system 100B illustrated in FIG. 18 includes an image processing device 2B and a display device 10B.
  • the image processing device 2B generates a contour image for displaying a stereoscopic image on the display device 10B.
  • the image processing device 2B includes a contour correction unit 210B, a stereoscopic image generation unit 220B, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • the stereoscopic image generation unit 220B uses the image information D11P and the image information D12P such that the display target displayed on the display unit 11B and the display unit 12B can be stereoscopically viewed from a predetermined position by binocular parallax. Is generated.
• the predetermined position is a viewing position of the observer 1 at which the contour can be emphasized most and at which the observer 1 can visually recognize the stereoscopic image. Accordingly, the stereoscopic image generation unit 220B generates image information D11P and image information D12P in which the position of the contour portion of the display target indicated by the image information D11P is adjusted according to the predetermined position.
  • the stereoscopic image generation unit 220B receives the supply of the image information D11S for display on the display unit 11B and the display unit 12B, and the display target can be stereoscopically viewed from a predetermined position based on the image information D11S. Such image information D11P and image information D12P are generated.
• the stereoscopic image generation unit 220B may generate, as the first image information to be displayed on the display unit 11B including the lenticular lens 13 as an optical unit that produces binocular parallax, image information in which the position of the contour portion of the display target is adjusted according to the predetermined position.
  • the contour correction unit 210B in the present embodiment includes a determination unit 213B and a correction unit 211B.
• the determination unit 213B determines whether or not the position of the contour portion of the display target displayed on the display unit 11B falls within the range between a first pixel and a second pixel that are adjacent to each other among the pixels of the display unit 11B.
  • the first and second pixels adjacent to each other in the display unit 11B are pixels provided in adjacent columns in the display unit 11B.
• based on the determination result, the determination unit 213B determines that correction is necessary when the position of the contour portion of the display target displayed on the display unit 11B falls within the range between the adjacent first and second pixels among the pixels of the display unit 11B, and determines that correction is not necessary when it does not fall within that range.
  • the adjacent first display area and second display area are areas arranged side by side in a direction (horizontal direction) in which stereoscopic parallax occurs.
• in other words, the determination unit 213B determines whether or not the position of the contour portion of the display target displayed on the display unit 11B falls within the range between the adjacent first pixel and second pixel among the pixels of the display unit 11B, and judges from the result whether correction is to be performed.
  • the adjacent first pixel and second pixel are pixels (display areas) arranged side by side in a direction (horizontal direction) in which stereoscopic parallax occurs.
• based on the determination result in the determination unit 213B, when it is determined that the position of the contour of the display target is within the range between the first pixel and the second pixel, the correction unit 211B corrects the contour portion of the display target to be displayed on the display device 10B.
• that is, the image information D11P to be displayed on at least one of the first pixel and the second pixel is corrected based on the predetermined position, the positions of the plurality of pixels arranged two-dimensionally in the display unit 11B, and the pixel position (display area position) in the display unit 11B corresponding to the contour portion of the display target.
  • the correction unit 211B corrects the image information D11P to be displayed on either the first pixel or the second pixel according to the correction amount of the image information D11P determined by pairing the first pixel and the second pixel.
  • the correction unit 211B corrects the image information D12P to be displayed on either the first pixel or the second pixel according to the correction amount of the image information D12P determined by pairing the first pixel and the second pixel.
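• The determination and pairwise correction might be sketched as follows; the proportional split of the contour luminance between the paired pixels is an assumed policy, since the source specifies only when correction is performed and which pixels form the pair:

```python
def correct_if_straddling(contour_x, image_row, pixel_pitch=1.0):
    """Determine whether the contour falls between adjacent pixels and,
    if so, correct the paired pixels.

    contour_x   : contour position of the display target along X, in
                  the same units as pixel_pitch (a fractional offset
                  means the contour falls between two pixel columns).
    image_row   : list of luminance values, one per pixel column.
    pixel_pitch : width of one pixel column.
    """
    first = int(contour_x // pixel_pitch)
    frac = (contour_x % pixel_pitch) / pixel_pitch
    # Determination: correction is needed only when the contour position
    # falls strictly between the adjacent first and second pixels.
    if not (0.0 < frac < 1.0 and 0 <= first and first + 1 < len(image_row)):
        return image_row
    corrected = list(image_row)
    peak = max(image_row[first], image_row[first + 1])
    corrected[first] = peak * (1.0 - frac)   # first pixel of the pair
    corrected[first + 1] = peak * frac       # second pixel of the pair
    return corrected

print(correct_if_straddling(4.25, [0, 0, 0, 0, 255, 0, 0, 0]))
```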
• the contour correction unit 210B can correct the image information D11P and the image information D12P so that the position of the observer 1 (user) viewing the display unit 11B and the display unit 12B becomes a predetermined position at which the display target can be stereoscopically viewed by binocular parallax.
  • the contour correction unit 210B corrects and outputs at least one of the supplied image information according to the image information supplied to the contour correction unit 210B.
• the image information to be processed by the contour correcting unit 210B includes image information D11P (first image information) and image information D12P (second image information).
• the image information D11P is, among the image information for stereoscopically displaying at a predetermined position, by binocular parallax, the display target to be displayed on the display unit 11B (first display unit) and the display unit 12B (second display unit), the image information to be displayed on one of the display unit 11B and the display unit 12B.
• the image information D12P is, among the image information for stereoscopically displaying the display target at the predetermined position by binocular parallax, the image information to be displayed on the other of the display unit 11B and the display unit 12B.
• the contour correction unit 210B in the present embodiment corrects the image information D11P (first image information) to generate the image information D11, and corrects the image information D12P (second image information) to generate the image information D12.
• alternatively, the contour correction unit 210B may correct the image information D11P (first image information) to generate the image information D12, and correct the image information D12P (second image information) to generate the image information D11.
• that is, the contour correction unit 210B switches the correspondence between the image information D11P (first image information) and image information D12P (second image information) and the image information D11 and image information D12 according to the display form, and outputs the result.
  • the correction method in the first embodiment can be applied.
  • the contour correction unit 210B in the present embodiment may perform correction by applying a plurality of correction methods according to the movement amount of the observer 1 as described below.
  • FIG. 19 is a diagram for explaining a correction method in a case where the observer moves from a region where a stereoscopic image determined according to the optical characteristics of the lenticular lens 13 can be observed to a region outside the region.
• in FIG. 19, the display unit 11B is indicated by column S1, and the display unit 12B is indicated by column S2.
  • FIG. 19 shows the display device 10B and the observer 1 (1 ′) at different positions in the X-axis direction.
  • Region Z1 indicates a range in which the column S1 of the display unit 11B can be observed
  • region Z2 indicates a range in which the column S2 of the display unit 12B can be observed.
• when the left eye of the observer 1 is located in the region Z1 (or region Z3), the right eye is located in the region Z2 (or region Z4), and the image presented to the left eye is displayed on the display unit 11B (column S1), the observer 1 can observe a stereoscopic image.
• when the left eye of the observer 1 is located outside the region Z1 (or region Z3), or the right eye is located outside the region Z2 (or region Z4), the observer 1 cannot observe the stereoscopic image.
• in other words, while the position of the observer 1 moves from Ma(i) to Ma(i+1), there are areas in which the stereoscopic image cannot be observed.
• the display system 100B therefore performs processing that changes part of the areas in which the stereoscopic image cannot be observed in the first display form into areas in which the stereoscopic image can be observed.
• part of the areas in which the stereoscopic image cannot be observed in the first display form can be made to allow observation of a stereoscopic image when the following condition is satisfied.
• for example, when the position of the observer 1 is at Ma′(i), the left eye of the observer 1 is located inside the region Z2 and the right eye is located inside the region Z3.
  • the display according to the second display form is a display form in which the positions for displaying the images presented to the left eye and the right eye of the observer 1 are reversed with respect to the display according to the first display form.
• by switching to the second display form, part of the region in which the stereoscopic image cannot be observed in the first display form can be changed into a region in which a stereoscopic image can be observed.
  • the contour correction unit 210B may switch the display form.
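• A minimal sketch of this display-form switching, assuming the regions of FIG. 19 are available as labels for each eye; the function and its return values are illustrative:

```python
def choose_display_form(left_eye_region, right_eye_region):
    """Pick the display form from the regions occupied by the two eyes.

    Region labels follow FIG. 19: Z1/Z3 face column S1 (display unit
    11B) and Z2/Z4 face column S2 (display unit 12B).  In the first
    display form the left-eye image is on S1; when the left eye lands
    in Z2 and the right eye in Z3, the images presented to the two
    eyes are swapped (second display form).
    """
    if left_eye_region in ("Z1", "Z3") and right_eye_region in ("Z2", "Z4"):
        return "first display form"    # left-eye image on column S1
    if left_eye_region == "Z2" and right_eye_region == "Z3":
        return "second display form"   # swap the left/right eye images
    return "no stereoscopic image"     # outside both observable layouts

print(choose_display_form("Z2", "Z3"))  # position Ma'(i) -> second form
```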
  • FIG. 20 is a diagram illustrating a correction method when the observer moves.
• the stereoscopic image displayed on the display device 10B while the position of the observer 1 shown in FIG. 19 moves from Ma(i) to Ma′(i) will be described.
  • FIG. 20A schematically shows a stereoscopic image observed when the observer 1 observes the target image from the position of Ma (i).
• FIGS. 20A to 20D schematically show the stereoscopic images observed when the observer 1, while observing the display device 10B, moves along the X axis from the position Ma(i) to Ma′(i).
• in each panel, the upper part shows the positional relationship between the display device 10B and the observer 1, as in FIG. 19. Below it are shown, in order from the top, the luminance of the edges (contour image PE1 and contour image PE2) of the image presented to the left eye, corrected according to the position of the observer 1, the luminance of the image information D11P (D12P), the luminance of the image presented to the left eye, and the luminance of the image presented to the right eye.
  • the correction amount of the contour image PE1 and the contour image PE2 is calculated by the contour correction unit 210B based on each of the image information D11 and the image information D12.
• the process of obtaining the image information D11 from the image information D11P is described as representative, and for the image information D12 only the result is shown.
  • the brightness of the image presented to the left eye is calculated by adding the brightness of the contour image PE1 and the contour image PE2 generated based on the image information D11P to the brightness of the image information D11P.
• the luminance of the image information D11 corresponds to an image that causes the left eye of the observer 1 to perceive the brightness IML.
  • an image in which the edge portions of the object (the left side edge portion E1 and the right side edge portion E2) are emphasized is generated and displayed on the display unit 11B as the image information D11.
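• A minimal sketch of this composition, assuming the images are held as luminance arrays of the same shape; the clipping to the display's peak value is an added safeguard, not from the source:

```python
import numpy as np

def compose_left_eye(base_d11p, contour_pe1, contour_pe2, peak=255.0):
    """Left-eye luminance: base image D11P plus contour images PE1 and PE2.

    All arguments are luminance arrays of the same shape; the sum gives
    the image information D11 presented to the left eye.
    """
    return np.clip(base_d11p + contour_pe1 + contour_pe2, 0.0, peak)
```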
  • the luminance of the image presented to the right eye is calculated based on the image information D12P.
  • the images observed with both eyes each cause binocular parallax with the luminance or width of the edge portion corrected.
• the image perceived by the observer 1 thus has edge portions of differing luminance, as in the image information D11 and the image information D12 shown in FIG. 20A.
• in FIG. 20A, the observer 1 is located at Ma(i), and an image is displayed in which a contour image PE1 and a contour image PE2 correcting the contour are generated at the edge portions (left-side edge portion E1 and right-side edge portion E2) of the quadrangular object displayed substantially in front of the observer 1. For example, an image corrected so that the luminance of the contour image PE1 and the luminance of the contour image PE2 are equal to each other is displayed.
  • the display form in FIG. 20A is, for example, the first display form described above with reference to FIG.
• when the position of the observer 1 moves in the X-axis direction (toward Ma′(i)) to the position shown in FIG. 20B, the object is observed further to the left of the observer 1 than in the direction in which it was expected to be seen.
• the position of the observer 1 in the case shown in FIG. 20B is within half the distance from Ma(i) to Ma′(i), and is assumed to be within a predetermined range near the halfway position between Ma(i) and Ma′(i). Thus, even when the observer 1 moves relatively little, the direction in which the object is seen changes.
  • the correction amounts of the contour image PE1 and the contour image PE2 are adjusted according to the movement amount of the observer 1.
• when the observer 1 moves in the X-axis direction from the reference position (Ma), the luminance of the edge portion on the same side as the direction in which the observer 1 moves is displayed higher.
  • the above-described various methods can be applied to adjust the correction amounts of the contour image PE1 and the contour image PE2 according to the movement amount of the observer 1.
  • a case where the correction method as shown in FIG. 13 is applied will be described.
• thereby, without changing the position at which the object is displayed on the display device 10B, a stereoscopic image can be generated that allows the observer 1 to recognize that the object has moved in the same direction as the movement direction of the observer 1.
• the display form in FIG. 20B is maintained in the first display form (FIG. 19), as in the case of FIG. 20A.
• if the position of the observer 1 moves further in the X-axis direction from the position shown in FIG. 20B while the display continues in the same display form as up to FIG. 20B, the observer reaches an area where the stereoscopic image cannot be observed, as shown in FIG. 19. In short, FIG. 20B shows a state in which the observer 1 is located near the limit of the region that can be corrected from the image referenced to Ma(i), the position suitable for observing the stereoscopic image, when moving in the X-axis direction.
• therefore, the display form of the display device 10B is switched to the second display form, as shown in FIG. 19.
  • the position where the display form is switched is set to a half position from Ma (i) to Ma ′ (i).
  • the display form up to FIG. 20B is the first display form
  • the display form after switching is the second display form.
  • FIG. 20C shows a state immediately after switching the display form.
• since the display form is switched to the second display form (FIG. 19), the area farther from Ma(i) than the halfway position between Ma(i) and Ma′(i) becomes an area in which a stereoscopic image can be observed.
• while the position of the observer 1 moves further in the X-axis direction and reaches the position Ma′(i) shown in FIG. 20D, the display form is maintained in the second display form. In this way, the observer 1 can observe the stereoscopic image displayed on the display device 10B while moving through this range.
• the correction shown in FIG. 20 has been described as calculating the correction amount according to the movement amount of the observer 1. However, the amount of calculation required for this correction processing can be reduced by approximating the detected movement amount of the observer 1 with discrete values based on predetermined representative values, or by approximating the correction amount of each contour image with discrete values. For example, by limiting to a few the number of images that can be displayed according to the position of the observer 1 between Ma(i) shown in FIG. 20A and Ma′(i) shown in FIG. 20D, the computational load of the correction calculation processing accompanying the movement of the observer 1 can be reduced.
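• A sketch of this discretization; the number of representative positions is an assumption:

```python
def quantize_movement(m, start, end, steps=4):
    """Snap a detected movement amount to one of a few representative values.

    Limiting the positions between Ma(i) and Ma'(i) to `steps` discrete
    values means only that many corrected images ever need to be
    computed, which is the computation-saving idea described above.
    """
    span = end - start
    idx = round((m - start) / span * (steps - 1))
    idx = min(max(idx, 0), steps - 1)
    return start + idx * span / (steps - 1)

print(quantize_movement(0.3, 0.0, 1.0))  # snaps to the nearest of 4 levels
```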
  • FIG. 21 is a diagram illustrating a modification of the display device 10B in the display system 100B.
  • FIG. 21 shows an enlarged part of a cross section of the display device 10B.
  • the display device 10B is assumed to be provided with a binocular lenticular lens type display.
• the display device 10B in this modification shown in FIG. 21 is provided with a multi-lens (multi-view) lenticular lens type display.
• on the display device 10B, a stereoscopic image is displayed so that stereoscopic viewing is possible from each viewing angle: images captured from a plurality of angles, corresponding to the angles seen from each of a plurality of viewing positions (viewing regions), are displayed for the respective viewpoints.
  • Such a display method is sometimes called an integral method (multi-view method).
  • the multi-lens type lenticular lens type display unit 11B and display unit 12B further divide the regions of the columns S1 and S2 in the display unit 11B and the display unit 12B shown in FIG. 19 into a plurality of columns.
  • the column S1 of the display unit 11B is divided into five columns (S11, S12, S13, S14, S15), and the column S2 of the display unit 12B is divided into five columns (S21, S22, S23, S24, S25).
• the display surface 10S of the display device 10B is arranged in the order of columns S11, S12, S13, S14, S15, S21, S22, S23, S24, S25, and so on.
  • a plurality of pixels are arranged side by side in the extending direction (Y-axis direction) of each column. As a result, stereoscopic images viewed from five directions can be displayed respectively.
  • the stereoscopic image displayed on the display device 10B can be observed from the position where it is most easily observed.
  • FIG. 22 is a diagram illustrating a method of correcting an image when observed from three directions.
  • the directions Da0, Db0, and Dc0 indicate directions in which the stereoscopic image is most easily observed.
• three stereoscopic images, one based on each of these directions, are prepared.
• the direction from which the observer 1 looks at the display device 10B can be calculated from the relative positional relationship with the display device 10B based on the result of detecting the observer 1.
  • two objects OBJ1 and OBJ2 are shown: the object close to the observer 1 is indicated by reference numeral OBJ1, and the object far from the observer 1 is indicated by reference numeral OBJ2.
  • Each of the stereoscopic images observed from the directions Da0, Db0, and Dc0 has an emphasized outline as an outline image for enabling stereoscopic viewing.
  • in the following description, the amounts of contour enhancement in the stereoscopic images observed from the directions Da0, Db0, and Dc0 are assumed to be equal.
  • a part of the shape of the object OBJ2 is shielded by the object OBJ1, so only the unshielded part can be observed.
  • the above three cases will be described in order.
  • with reference to the stereoscopic image observable from the direction Da0, the direction observed from the positive side of the X axis relative to Da0 is denoted Da1, and the direction observed from the negative side of the X axis relative to Da0 is denoted Da2. Similarly, the directions Db1 and Db2 and the directions Dc1 and Dc2 are determined with reference to the directions Db0 and Dc0, respectively.
  • when observed from the direction Da1, the shielded range is observed to be wider than when observed from the direction Da0; when observed from the direction Da2, the shielded range is observed to be narrower than when observed from the direction Da0.
  • the display device 10B can present images in a limited number of directions, but it is difficult for it to present an image whose observation direction changes continuously. Therefore, when the display is observed from a direction for which no image has been prepared, an image prepared for a representative direction is corrected and displayed. The correction method is described below. To the stereoscopic image of the object OBJ1 indicated by the symbol A0, a left-side edge image PE1 added to the left side of the object and a right-side edge image PE2 added to the right side are added as the edge images PE.
  • similarly, a left-side edge image PE1′ added to the left side of the object and a right-side edge image PE2′ added to the right side are added to the stereoscopic image of the object OBJ2 as the edge images PE.
  • to present an image corresponding to the direction Da1, the luminance of the left-side edge image PE1 is increased and the luminance of the right-side edge image PE2 is decreased, while the luminance of the left-side edge image PE1′ is decreased and the luminance of the right-side edge image PE2′ is increased. As a result, the region where the object OBJ1 and the object OBJ2 appear to overlap is widened as perceived by the observer 1, and the observer 1 perceives the image as if it were viewed from the direction Da1.
  • conversely, to present an image corresponding to the direction Da2, the luminance of the left-side edge image PE1 is decreased and the luminance of the right-side edge image PE2 is increased, while the luminance of the left-side edge image PE1′ is increased and the luminance of the right-side edge image PE2′ is decreased.
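  • The luminance rebalancing above can be sketched as follows; the function name, the [-1, 1] correction amount, and the 8-bit luminance range are assumptions for illustration, not values from the embodiment:

```python
def shift_perceived_edges(edge_left, edge_right, amount):
    """Rebalance the luminance of a left/right edge-image pair to simulate
    a small change of viewing direction, as described above.
    'amount' in [-1, 1]: positive mimics observing from Da1, negative
    from Da2. Luminances are illustrative 8-bit values in [0, 255]."""
    gain = 1.0 + amount
    atten = 1.0 - amount
    new_left = min(255, int(edge_left * gain))
    new_right = max(0, int(edge_right * atten))
    return new_left, new_right

# For the near object OBJ1 seen from Da1: brighten PE1, dim PE2.
pe1, pe2 = shift_perceived_edges(128, 128, +0.3)
# For the far object OBJ2 the same correction is applied with opposite sign.
pe1p, pe2p = shift_perceived_edges(128, 128, -0.3)
```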
  • next, the cases indicated by the symbols B0, B1, and B2 will be described.
  • when observed from the direction Db1, the shielded range is observed to be wider than in the direction Db0; when observed from the direction Db2, it is observed to be narrower than in the direction Db0. This tendency is the same as that for the direction Da0 described above.
  • corrected images can be generated in the same manner as the diagrams referenced by the aforementioned symbols A0, A1, and A2.
  • next, the cases indicated by the symbols C0, C1, and C2 will be described.
  • the object OBJ1 and the object OBJ2 observed from the direction Dc0 do not have a region that shields each other.
  • when observed from the direction Dc1, the interval between the object OBJ1 and the object OBJ2 is observed to be narrower than in the direction Dc0, and when observed from the direction Dc2, the interval is observed to be wider than in the direction Dc0.
  • corrected images are generated in the same manner as the diagrams referenced by the symbols A0, A1, and A2.
  • with the corrected image for the direction Dc1, the interval between the object OBJ1 and the object OBJ2 is perceived by the observer 1 as narrowed; with the corrected image for the direction Dc2, the interval is perceived by the observer 1 as widened.
  • thus, merely by slightly correcting the displayed image when the viewing direction of the display device 10B changes, the observer 1 can perceive the relative positional relationship between the objects changing in the same manner as when observing actual objects, and can recognize that the positional relationship has changed.
  • the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • FIG. 23 is a schematic block diagram showing a configuration of a display system 100C according to an embodiment of the present invention.
  • a display system 100C illustrated in FIG. 23 includes an image processing device 2C and a display device 10.
  • the image processing apparatus 2C has a feature of generating a contour image.
  • the image processing apparatus 2C includes a contour correction unit 210, a stereoscopic image generation unit 220C, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • the contour correcting unit 210 in the present embodiment corrects at least one of the supplied image information according to the image information (image information D11P, image information D12P) supplied from the stereoscopic image generating unit 220C. Output.
  • the stereoscopic image generation unit 220C generates image information D11P that allows the display target to be viewed stereoscopically from a predetermined position by binocular parallax, based on the display target displayed on the display unit 11 and the display unit 12.
  • the predetermined position is the visual recognition position of the observer 1 at which the contour is emphasized most and from which the observer 1 can visually recognize the stereoscopic image.
  • the stereoscopic image generation unit 220C generates image information D11P in which the position of the contour portion of the display target displayed by the image information D11P is adjusted according to the predetermined position. For example, the stereoscopic image generation unit 220C receives the supply of image information D11S to be displayed on the display unit 11 and the display unit 12, and generates, based on the image information D11S, image information D11P such that the display target can be stereoscopically viewed from the predetermined position. At this time, the stereoscopic image generation unit 220C outputs image information D12P based on the image information D11S.
  • the stereoscopic image generation unit 220C may generate image information D12P that allows the display target to be viewed stereoscopically from a predetermined position based on the image information D11S. More specifically, the stereoscopic image generation unit 220C sets, as the image information D11P, an image for adding the edge image PE to the image information D11S based on the image information D11S. The stereoscopic image generation unit 220C generates image information D11P from the image information D11S based on the positional relationship between the position of the observer 1 and the display device 10.
  • the stereoscopic image generation unit 220C generates the image information D12P based on the image information D11S, and outputs the image information D11P after determining, based on the positional relationship between the position of the observer 1 and the display device 10, the magnification and display position of the image information D11P generated from the image information D11S.
  • the stereoscopic image generation unit 220C may generate the image information D11P that displays the display target displayed by the perspective method as an image rotated with reference to the virtual axis according to the predetermined position.
  • the stereoscopic image generation unit 220C receives the supply of the information (D11S) to be displayed on the display unit 11 and the display unit 12, and based on the supplied information (D11S), generates image information D11P such that the display target can be viewed stereoscopically from the predetermined position by binocular parallax.
  • the process of displaying the display target as an image rotated with respect to the axis can be applied to information having three-dimensional information. Details of the generation method will be described later.
  • the stereoscopic image generation unit 220C may generate the image information D11P by deforming the shape of the display target displayed based on the image information D11P according to the displacement of the predetermined position, thereby setting the position of the contour portion of the display target.
  • the stereoscopic image generation unit 220C may adjust, according to the predetermined position, the position of the contour portion of the display target in the image information used to display the display target by transmitting one of the image P11 displayed on the display unit 11 and the image P12 displayed on the display unit 12 through the other.
  • the display unit 11 is arranged at a distance from the display unit 12 in the normal direction (−Z direction) of the display unit 12.
  • the display unit 12 is a transmissive display unit.
  • the stereoscopic image generation unit 220C generates the image information D11P as image information in which the position of the contour of the display target is adjusted according to the predetermined position, so that the image P11 displayed on the display unit 11 is seen through the image P12 displayed on the display unit 12.
  • the stereoscopic image generation unit 220C generates the image information D11P to be displayed on the display unit 11, which is arranged at a distance from the display unit 12 in the normal direction (−Z direction) of the display unit 12.
  • the stereoscopic image generation unit 220C generates, as the image information D11P, information obtained by extracting the information indicating the contour portion of the display target from the image information D12P for displaying an image on the display unit 12, among the image information displayed stereoscopically at the predetermined position.
  • the stereoscopic image generation unit 220C extracts feature points for stereoscopic image display from the base image information (image information D11S), and generates stereoscopic image information that emphasizes the extracted feature points.
  • the base image information D11S used by the stereoscopic image generation unit 220C may be any of a still image, a CG image, and a moving image.
  • the feature points for performing the stereoscopic image display extracted from the base image information D11S can be optimized according to the base image information D11S.
  • the stereoscopic image generation unit 220C can select processing corresponding to the main subject based on information associated with the image or predetermined information.
  • the image processing apparatus 2C (stereoscopic image generation unit 220C) extracts a person or a focused subject as a feature point for displaying a stereoscopic image.
  • a known method can be applied to a method for extracting a person from a base image and a method for extracting a focused subject from a base image.
  • the image processing device 2C (stereoscopic image generation unit 220C) generates stereoscopic image information based on the extracted feature points (such as a person) so as to stereoscopically display the main subject corresponding to the feature points.
  • when the base image is a still image, the image processing device 2C extracts the main subject as a feature point for performing stereoscopic image display. The information indicating the main subject is indicated by, for example, meta information associated with the image. The image processing device 2C (stereoscopic image generation unit 220C) then generates stereoscopic image information so that the extracted main subject is stereoscopically displayed.
  • also when the base image is a still image, the image processing apparatus 2C can determine whether or not a dividing line exists at a position assumed based on the golden ratio, and can detect feature points based on the determination result.
  • the image processing apparatus 2C may extract feature points according to the arrangement in the screen.
  • for example, a setting is made such that the extraction priority is lowered in order, starting from subjects arranged at the center of the screen. By setting the priority in this way, the probability that feature points are extracted from the corners of the screen is reduced, so the positions from which feature points are extracted can be kept from being biased.
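  • One hedged way to realize such position-dependent priority is a weight map that is highest at the screen center and decreases toward the corners; the falloff shape below is purely illustrative, as the embodiment only states that priority can depend on position:

```python
import numpy as np

def position_priority(h, w, corner_penalty=0.5):
    """Build a priority map over pixel positions so that feature points
    are less likely to be picked near the screen corners. Illustrative
    only; the embodiment does not prescribe the falloff shape."""
    y, x = np.mgrid[0:h, 0:w]
    # Distance from the image center, normalized so the corners are at 1.0.
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2.0)
    return 1.0 - corner_penalty * d      # 1.0 at center, lower at corners

# Candidate feature-point scores can be multiplied by this map before
# choosing where to extract feature points.
weights = position_priority(480, 640)
```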
  • when it is determined that the main subject of the base image is a landscape, the image processing apparatus 2C (stereoscopic image generation unit 220C) may preferentially extract the distant scenery as a feature point. Even when the main subject of the image is a landscape, if a subject in focus on the near side can be detected, the feature point may be preferentially extracted from the foreground subject. This selection may be switched by a setting.
  • when the base image is a moving image, the image processing device 2C can extract the main subject based on the difference information between a plurality of images; for example, it can extract, as a feature point, a subject with a large amount of movement in the screen or a subject photographed across a plurality of continuous images.
  • for the process of extracting, from images photographed as a plurality of consecutive frames, a moving subject whose movement (amount and direction) differs from the movement of the background, known methods such as inter-frame difference and vector processing of the movement amount can be applied.
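  • The following is a rough sketch of the inter-frame-difference approach mentioned above, assuming grayscale frames and no camera (background) motion compensation:

```python
import numpy as np

def moving_subject_mask(frames, threshold=20):
    """Extract a rough mask of the moving subject from consecutive frames
    using inter-frame differences, one of the known methods mentioned
    above. Background-motion compensation is omitted; frames are assumed
    to be grayscale uint8 arrays of equal shape."""
    diffs = [np.abs(frames[i + 1].astype(np.int16) - frames[i].astype(np.int16))
             for i in range(len(frames) - 1)]
    # A pixel that changes in most frame pairs is likely on the moving subject.
    votes = sum((d > threshold).astype(np.uint8) for d in diffs)
    return votes >= max(1, len(diffs) // 2)
```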
  • the image processing apparatus 2C (stereoscopic image generation unit 220C) specifies a main subject based on the extracted subject and the feature points of the subject, and generates stereoscopic image information so that the specified main subject is stereoscopically displayed.
  • the main subject can be extracted from the focal length information based on the history information at the time of shooting and the distance information to the subject.
  • the image processing apparatus 2C uses the focal length information and the distance information based on the history information at the time of shooting to extract the subject at the focused position at the time of shooting as the main subject.
  • the image processing device 2C (stereoscopic image generation unit 220C) generates stereoscopic image information so that the extracted main subject is stereoscopically displayed.
  • when the base image is a CG (computer graphics) image, information in the depth direction (a depth map) of the display target can be created using the information of the 3D model.
  • a display target can be extracted based on this depth direction information (depth map), or the direction in which an object is displayed on the display device 10 can be set.
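  • A minimal sketch of this depth-map-based extraction, assuming the depth map is given as a 2D array and the target is defined by a depth range (the range values are illustrative):

```python
import numpy as np

def extract_display_target(depth_map, near, far):
    """Select the display target from a CG depth map by keeping pixels
    whose depth lies in [near, far]. With a 3D model the depth map is
    exact, so objects can be separated per depth slice."""
    return (depth_map >= near) & (depth_map <= far)

# Example: treat everything closer than depth 1.5 as the foreground target.
depth = np.random.uniform(0.5, 5.0, size=(64, 64))
foreground = extract_display_target(depth, near=0.0, far=1.5)
```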
  • FIG. 24 is a diagram illustrating a process of displaying a display object displayed by perspective in a pseudo-rotation manner.
  • FIGS. 24A and 24B show a rectangular parallelepiped drawn in perspective (two-point perspective). FIG. 24A shows the rectangular parallelepiped when FP1 and FP2 are defined as the vanishing points, that is, a rectangular parallelepiped having the illustrated vertices (QA, QB, QC, QD, QE, QF, QG). (Vertices hidden inside the body are not included.)
  • for example, consider virtually generating an image of the rectangular parallelepiped shown in FIG. 24A as viewed from a position closer to the front of the surface formed by QA-QB-QF-QG than the current viewpoint. When the viewpoint is moved in this way, the rectangular parallelepiped is visually recognized as shown in FIG. 24B. The conversion from FIG. 24A to the diagram shown in FIG. 24B can be obtained as if the coordinate system containing the rectangular parallelepiped were rotated about the rotation axis RA shown in FIG. 24B.
  • the stereoscopic image generation unit 220C generates, as shown in FIG. 24B, an image obtained by rotating the rectangular parallelepiped (display target) displayed by the perspective method about the rotation axis RA (virtual axis) according to the predetermined position; images of the rectangular parallelepiped viewed from various directions can thus be obtained by calculation processing.
  • the movement of the position of the observer 1 may be detected, and the rectangular parallelepiped displayed on the display device 10 may be rotated in conjunction with the detected movement amount. By interlocking with the movement (movement amount) of the observer 1 in this way, the displayed image can be rotated without using any special input means.
  • the shape of the rectangular parallelepiped to be displayed is deformed in accordance with the rotation linked to the movement amount (movement) of the observer 1.
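  • The pseudo-rotation of FIG. 24 can be sketched as a rotation of the vertices about a vertical axis followed by a projection; the pinhole projection below stands in for the two-point perspective of the figure, and all parameter values are assumptions:

```python
import numpy as np

def rotate_about_vertical_axis(points, angle_rad, axis_x=0.0, axis_z=0.0):
    """Rotate N x 3 vertices about a vertical axis RA placed at
    (axis_x, axis_z). Linking angle_rad to the detected observer movement
    yields the pseudo-rotation of FIG. 24B without special input devices."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    p = points.astype(float).copy()
    x, z = p[:, 0] - axis_x, p[:, 2] - axis_z
    p[:, 0] = c * x + s * z + axis_x
    p[:, 2] = -s * x + c * z + axis_z
    return p

def project(points, focal=2.0, viewer_z=5.0):
    """Pinhole projection onto the display plane, standing in for the
    two-point perspective drawing of FIG. 24."""
    depth = viewer_z - points[:, 2]
    return np.stack([focal * points[:, 0] / depth,
                     focal * points[:, 1] / depth], axis=1)

# Example: a unit cube rotated by 15 degrees, then projected for display.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
flat = project(rotate_about_vertical_axis(cube, np.radians(15.0)))
```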
  • in order to stereoscopically display an image viewed from an arbitrary direction, the stereoscopic image generation unit 220C generates, from the input image information, a contour image in which the position of the contour portion of the rectangular parallelepiped to be displayed is set according to the display conditions. Thereby, even when the display conditions change, the response time until the stereoscopic image is displayed can be shortened.
  • various images such as still images, moving images, and CG images can be displayed, and by emphasizing the object (main subject) to be displayed according to the characteristics of each type of image, the stereoscopic expressive power of the generated stereoscopic image information can be enhanced.
  • the contour correcting unit 210 in the image processing apparatus 2C corrects the contour image based on the image information D11P and image information D12P (stereoscopic image information) generated by the stereoscopic image generating unit 220C.
  • the contour correction unit 210 corrects the position and direction in which the contour image is arranged, and the brightness balance, based on the representative position of the user.
  • the contour correcting unit 210 sets the brightness of the contour portion according to the amount of movement of the position of the contour image obtained by calculation. For the correction processing performed by the contour correction unit 210, the methods described in the above embodiments can be applied.
  • FIG. 25 is a flowchart illustrating processing performed by the image processing apparatus 2C.
  • the detection unit 250 detects the position of the observer 1 based on image information obtained by the imaging unit 230 imaging so that the observer 1 is included in the imaging range (step S10).
  • the stereoscopic image generation unit 220C generates a contour image of a stereoscopic image that can be stereoscopically viewed from the observer 1 based on the detected position of the observer 1 (step S20).
  • the position of the observer 1 is the position detected by the detection unit 250 in step S10.
  • the stereoscopic image generation unit 220C generates a contour image in at least one piece of the image information representing the images that are visually recognized as a superimposed image, so that the stereoscopic image can be visually recognized from the detected position of the observer 1.
  • the stereoscopic image generation unit 220C generates at least image information (contour image) D11P.
  • the contour correction unit 210 corrects the contour portion of the generated contour image based on the detected position of the observer 1 (step S30). For example, the generated contour image is at least the contour image D11P. The contour correction unit 210 corrects at least the contour portion of the contour image D11P to generate image information D11.
  • the control unit 260 displays the corrected contour image on the display unit 11 and the display unit 12 in the display device 10 (step S40). For example, the control unit 260 causes the display unit 11 in the display device 10 to display the image information D11 including the corrected contour image.
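  • The flow of steps S10 to S40 can be summarized in the sketch below; the unit objects and method names are placeholders standing in for the imaging unit 230, detection unit 250, stereoscopic image generation unit 220C, contour correction unit 210, and control unit 260, not a real API:

```python
# Placeholder objects stand in for the units of FIG. 23; only the call
# sequence mirrors FIG. 25 (steps S10-S40), the method names are assumed.

def run_display_loop(imaging_unit, detection_unit, generation_unit,
                     correction_unit, control_unit):
    while True:
        frame = imaging_unit.capture()                    # observer 1 in frame
        position = detection_unit.detect(frame)           # S10: detect position
        contour = generation_unit.generate(position)      # S20: contour image D11P
        corrected = correction_unit.correct(contour, position)  # S30: image D11
        control_unit.show(corrected)                      # S40: display on unit 11
```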
  • the image processing device 2C can correct the contour of the image information displayed on the display device 10. Thereby, even when the observer 1 moves from the predetermined position where the stereoscopic image can be visually recognized, the visibility of the stereoscopic image can be maintained.
  • in the first embodiment and in each modification, the contour image D11P was illustrated and described as a contour image showing only a contour; however, in generating the contour image D11P, new image information synthesized based on the image information of the above-described contour image D11P may be used as the contour image. The image information newly obtained by such synthesis corresponds to the original image with its contour emphasized.
  • according to the display system 100C shown in the present embodiment, the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • the display system 100 (100A, 100B, 100C) shown in each of the above embodiments detects the position of the moving observer 1 (user) and causes the display device 10 (10A, 10B) to display a stereoscopic image corresponding to the position of the observer 1 (user).
  • the edge portions in the images observed from the left eye and the right eye are corrected so as to generate parallax.
  • thereby, the observer 1 can stereoscopically view the image from the position where the observer 1 is present.
  • a person (an object existing on the display surface side) present on the side of the display surface displayed by the display device 10 (10A, 10B) of the display system 100 (100A, 100B, 100C) is called the observer 1. The observer 1 is, for example, a person who is looking at the displayed surface, a person who is trying to see it, a person who can see it, or simply a person who is present on the display surface side.
  • the viewing position at which the stereoscopic image can be viewed stereoscopically is a position within the viewing area where the stereoscopic viewing is possible, which is determined by, for example, the distance and angle with respect to the display surface displaying the stereoscopic image.
  • hereinafter, this “viewing area”, or a “viewing position” within the “viewing area”, is referred to simply as the “viewing position”.
  • the contours of the stereoscopic image exemplified in each of the above embodiments extend in the vertical direction (Y axis (FIG. 1)) of the display device 10, but the same processing can also be applied to contours extending in the horizontal and oblique directions of the display device 10. In that case, the correction amount for correcting the contour image may be adjusted in the direction orthogonal to the extending direction of the target contour.
  • the position of the pixel at which the correction of the contour image exemplified in each of the above embodiments is performed has been described with the example of the pixel adjacent to the position of the contour before correction; however, the corrected pixel may instead be a predetermined number of pixels away from the position of the contour before correction. For example, when a contour is drawn by connecting a plurality of pixels, the corrected pixel may be separated by a predetermined number of pixels corresponding to the width of the contour.
  • as for the display unit that displays the contour image, an example was shown in which the contour image is displayed on the display unit 11 located in the depth direction ((−Z) axis direction) among the display units arranged so that the displayed images overlap; however, the display unit 12 disposed in front of the display surface 11S of the display unit 11 may be used instead. In other words, the contour image can be displayed on at least one of the display units arranged so that the displayed images overlap, and it can also be displayed on a plurality of display units. Moreover, the display device is not limited to the display device 10.
  • the display system 100 may supply image information to be displayed on a head-mounted display device, a so-called head mounted display (HMD), instead of the display device 10.
  • the display system 100 may supply image information to be displayed on a stereoscopic image display device using lens shutter glasses instead of the display device 10.
  • FIG. 27 is a schematic diagram illustrating an example of a schematic configuration of a display device 2100 according to the fifth embodiment of the present invention.
  • an XYZ rectangular coordinate system is set, and the positional relationship of each part will be described with reference to this XYZ rectangular coordinate system.
  • a direction in which the display device 2100 displays an image is a positive direction of the Z axis, and orthogonal directions on a plane perpendicular to the Z axis direction are an X axis direction and a Y axis direction, respectively.
  • the X-axis direction is the horizontal direction of the display device 2100
  • the Y-axis direction is the vertical upward direction of the display device 2100.
  • an outline of a configuration of the display device 2100 will be described.
  • a display device 2100 includes a first display unit 2011 as a display unit 2010 and a second display unit 2012.
  • the first display unit 2011 includes a first display surface 2011S for displaying an image at the depth position Z1.
  • the second display unit 2012 includes a second display surface 2012S for displaying an image at the depth position Z2.
  • the user 1 views the first display surface 2011S and the second display surface 2012S from a viewpoint position VP predetermined at the depth position ZVP. Since the second display surface 2012S is a transmissive screen, when the user 1 views the second display surface 2012S from the viewpoint position VP, the image displayed on the first display surface 2011S and the image displayed on the second display surface 2012S appear to overlap.
  • the first display surface 2011S displays a cubic image P11.
  • the second display surface 2012S displays a cubic image P12.
  • the size and position of the image P12 displayed on the second display surface 2012S are set in advance so that, when the user 1 views it from the viewpoint position VP, the ridge lines of the cube indicated by the image P12 appear to overlap the ridge lines of the cube indicated by the image P11.
  • the image P12 displayed on the second display surface 2012S may be an image showing the display target OBJ as it is, or an image showing the outline (edge part) of the display target OBJ in an emphasized manner.
  • the image P12 displayed on the second display surface 2012S may be a contour image showing a contour portion of the display target OBJ.
  • This contour image is generated by extracting the contour portion of the image P11 displayed on the first display surface 2011S by using, for example, a differential filter.
  • the contour image may be an image in which the width of the pixel representing the contour portion is one pixel, or may be an image in which the width of the pixel representing the contour portion is a plurality of pixels.
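  • A minimal sketch of contour extraction with a simple differential (gradient) filter, as mentioned above; the threshold and the optional thickening to a multi-pixel width are illustrative choices, not values from the embodiment:

```python
import numpy as np

def contour_image(image, threshold=30, width=1):
    """Extract a contour image from P11 with a simple differential
    (gradient) filter. 'width' > 1 thickens the contour so the pixel
    representing the contour portion spans several pixels.
    Input: 2D uint8 array; output: 2D uint8 contour image."""
    img = image.astype(np.int16)
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))  # horizontal difference
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))  # vertical difference
    edges = np.maximum(gx, gy) > threshold
    for _ in range(width - 1):                             # optional thickening
        grown = edges.copy()
        grown[1:, :] |= edges[:-1, :]
        grown[:, 1:] |= edges[:, :-1]
        edges = grown
    return (edges * 255).astype(np.uint8)
```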
  • in the following, the case where the image P12 is a contour image will be described, that is, the case where the second display surface 2012S displays the contour image P12.
  • the second display surface 2012S is a transmissive screen and is located on the near side (+ Z side) of the depth position with respect to the first display surface 2011S when viewed from the viewpoint position VP.
  • conversely, the first display surface 2011S may be the transmissive screen and may be located on the near side (+Z side) of the second display surface 2012S when viewed from the viewpoint position VP.
  • the display device 2100 can cause the user 1 to perceive a highly accurate stereoscopic image: for example, if the position of the ridge line of the cube displayed on the first display surface 2011S as viewed from the viewpoint position VP and the position of the ridge line of the cube displayed on the second display surface 2012S are precisely aligned, the user 1 perceives a highly accurate stereoscopic image.
  • the first display surface 2011S and the second display surface 2012S are, for example, liquid crystal displays or screens of a liquid crystal projector, and have two-dimensionally arranged pixels.
  • the second display surface 2012S displays the ridge line of the cube on the pixel including the position Pt2 corresponding to the position Pt1 of the ridge line of the cube displayed on the first display surface 2011S viewed from the viewpoint position VP.
  • the pixel has an area corresponding to the definition of the second display surface 2012S.
  • when the position Pt2 coincides with the center position Pt3 of the pixel, the position of the ridge line of the cube displayed on the second display surface 2012S is precisely aligned with the position Pt1 of the ridge line of the cube displayed on the first display surface 2011S as viewed from the viewpoint position VP. When the position Pt2 and the center position Pt3 of the pixel do not coincide, however, the alignment accuracy is lowered.
  • in that case, the accuracy of the stereoscopic image perceived by the user 1 (for example, the sense of depth of the stereoscopic image) may be reduced. This accuracy of alignment between the images will be described with reference to FIGS. 28 and 29.
  • FIG. 28 is a schematic diagram illustrating an example of a pixel configuration of the second display surface 2012S of the present embodiment.
  • the second display surface 2012S has a plurality of pixels arranged two-dimensionally on the XY plane. Some of the pixels indicate the ridges of a cube that is the display target OBJ.
  • a plurality of pixels Px (pixels Px11 to Px33) indicating the ridgelines of the cube will be described with reference to FIG.
  • FIG. 29 is a schematic diagram illustrating an example of a pixel Px on the second display surface 2012S of the present embodiment.
  • the plurality of pixels Px indicating the cubic ridge lines are nine pixels including a central pixel Px22 and surrounding pixels Px11 to Px33.
  • the central pixel Px22 is a pixel including a position Pt2 corresponding to the position Pt1 of the cubic ridgeline displayed on the first display surface 2011S viewed from the viewpoint position VP.
  • the position Pt3 described above is the center position of the pixel Px22 here.
  • a case will be described in which the position Pt2 is shifted by a distance ΔPt in the (+Y) direction with respect to the position Pt3.
  • when the position Pt2 coincides with the center position Pt3 of the pixel, the position Pt1 of the ridge line of the cube displayed on the first display surface 2011S as viewed from the viewpoint position VP and the position of the ridge line of the cube displayed on the second display surface 2012S are precisely aligned. When the position Pt2 is shifted from the center position Pt3 of the pixel by the distance ΔPt in the (+Y) direction, however, the ridge line is displayed shifted in the (−Y) direction from the original position where the ridge line of the cube should be displayed.
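  • Under the assumed plane geometry of FIG. 27 (viewpoint VP, first surface at Z1, second surface at Z2), the position Pt2 and its offset ΔPt from the nearest pixel center can be computed as sketched below; all coordinate values and the pixel pitch are illustrative:

```python
def project_to_second_surface(vp, pt1, z2):
    """Intersect the line of sight VP -> Pt1 with the plane z = z2 of the
    second display surface, returning the (x, y) of Pt2.
    vp and pt1 are (x, y, z) tuples."""
    t = (z2 - vp[2]) / (pt1[2] - vp[2])
    return (vp[0] + t * (pt1[0] - vp[0]),
            vp[1] + t * (pt1[1] - vp[1]))

def offset_from_pixel_center(coord, pitch):
    """Signed distance (the dPt of FIG. 29) between a coordinate on the
    display surface and the center of the pixel containing it, assuming
    pixel centers sit at integer multiples of the pitch."""
    return coord - round(coord / pitch) * pitch

# Illustrative numbers only: viewpoint at z = 10, first surface at z = 0,
# second surface at z = 4, pixel pitch 0.25.
pt2_x, pt2_y = project_to_second_surface((0.0, 0.0, 10.0), (3.0, 2.0, 0.0), 4.0)
d_pt = offset_from_pixel_center(pt2_y, pitch=0.25)   # shift along Y, as in FIG. 29
```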
  • the display device 2100 of this embodiment includes a contour correction unit 2013 that corrects the contour image P12.
  • the contour correcting unit 2013 corrects the contour image P12 based on the direction of deviation between the position Pt2 and the position Pt3.
  • the display device 2100 uses the contour correction unit 2013 to improve the accuracy of the stereoscopic image perceived by the user 1.
  • a specific configuration of the display device 2100 including the contour correction unit 2013 will be described.
  • FIG. 30 is a schematic diagram illustrating an example of a specific configuration of the display device 2100 of the present embodiment.
  • the display device 2100 includes the first display unit 2011, the second display unit 2012, and the contour correction unit 2013.
  • the first display unit 2011 includes a first display surface 2011S.
  • the first display unit 2011 emits the light R11 to the user 1 by displaying the image P11 on the first display surface 2011S.
  • the light R11 passes through the second display surface 2012S, which is a transmissive screen, and reaches the user 1.
  • the first display surface 2011S converts the image information D11 supplied from the image supply device 2002 into an image P11 and displays it.
  • the image information D11 is, for example, image information indicating a cube that is the display target OBJ.
  • the second display unit 2012 includes a second display surface 2012S.
  • the second display unit 2012 emits the light R12 to the user 1 by displaying the contour image P12 on the second display surface 2012S.
  • the second display surface 2012S converts into the contour image P12 and displays the corrected image information D12C, which the contour correcting unit 2013 obtains by correcting the image information D12 supplied from the image supply device 2002.
  • the image information D12 is, for example, image information of a contour image indicating a contour portion of a cube that is the display target OBJ.
  • the corrected image information D12C is image information of a corrected contour image obtained by correcting the image information D12 by the contour correcting unit 2013.
  • the display device 2100 displays the image P11 and the contour image P12 indicating the contour portion of the display object OBJ in an overlapping manner. Thereby, a stereoscopic effect can be given to the display object OBJ observed by the user 1. That is, when the user 1 sees the image P11 and the contour image P12 indicating the contour portion of the display target OBJ overlapping, the user 1 perceives a stereoscopic image in which the display target OBJ pops out in the Z-axis direction.
  • a mechanism for giving a stereoscopic effect to the user 1 using an image displayed on the display device 2100 will be described with reference to FIGS. 31 to 37D.
  • FIG. 31 is a schematic diagram illustrating an example of an image P11 of the display target OBJ according to the present embodiment.
  • the case where the display target OBJ is a square pattern will be described below as an example.
  • This square pattern is a square pattern in which four sides of equal length intersect at right angles at each vertex.
  • This rectangular pattern has four sides as contour lines that separate the outside and the inside of the rectangular pattern.
  • An observer who sees the square pattern perceives the contour line as the edge portion E of the square pattern when the difference in brightness between the outside and the inside of the square pattern is large. That is, the edge portion E is a portion of the display object OBJ in which the difference in ambient brightness is relatively larger than the difference in brightness in other portions.
  • each side can be an edge portion E, but here, among the edge portions E, the two sides of the square pattern parallel to the Y-axis direction are described as the edge portion E1 and the edge portion E2, respectively.
  • FIG. 32 is a schematic diagram illustrating an example of the contour image P12 of the present embodiment.
  • the contour image P12 is an image including the edge image PE1 indicating the edge portion E1 of the square pattern and the edge image PE2 indicating the edge portion E2. That is, when the display target OBJ is a quadrangular pattern as described above, the second display surface 2012S displays the contour image P12 including the edge image PE1 and the edge image PE2 corresponding to the edge portion E1 and the edge portion E2, respectively. To do.
  • FIG. 33 is a schematic diagram illustrating an example of a positional relationship among the image P11, the contour image P12, and the viewpoint position VP in the present embodiment.
  • from the viewpoint position VP at the position ZVP, the contour image P12 at the position Z2 and the image P11 at the position Z1 are seen superimposed.
  • the contour image P12 is an image including the edge image PE1 and the edge image PE2 corresponding to the edge portion E1 and the edge portion E2 of the square pattern as the display object OBJ.
  • the position in the X direction of the edge portion E1 of the square pattern is the position X2, and the position in the X direction of the edge portion E2 is the position X5.
  • the second display surface 2012S displays the contour image P12 so that the edge portion E1 at the position X2 and the edge image PE1 of the contour image P12 appear to overlap at the viewpoint position VP.
  • the second display surface 2012S displays the contour image P12 so that the edge portion E2 at the position X5 and the edge image PE2 of the contour image P12 appear to overlap at the viewpoint position VP.
  • when the contour image P12 displayed in this way and the image P11 showing the display object OBJ (for example, the square pattern) are superimposed, a step in brightness too small to be recognized arises on the retinal image of the observer.
  • a virtual contour is perceived between the steps of brightness (for example, luminance), and the contour image P12 and the image P11 are perceived as one image. Since the virtual contour is perceived slightly shifted between the two eyes, it produces binocular parallax, and as a result the apparent depth position of the image P11 changes.
  • the optical image IML viewed by the left eye L and the optical image IMR viewed by the right eye R will be described in this order, and a mechanism for changing the apparent depth position of the image P11 will be described.
  • FIG. 34 is a schematic diagram illustrating an example of an optical image IM that can be seen by an observer's eye in the present embodiment.
  • FIG. 34 (L) is a schematic diagram showing an example of an optical image IML that can be seen by the left eye L of the observer.
  • when the image P11 and the contour image P12 are viewed from the position of the left eye L of the viewpoint position VP at the position ZVP, the edge image PE1 and the edge portion E1 appear to overlap in the range from the position X2 to the position X3. Since the edge portion E1 of the square pattern is located at the position X2, to the left eye L of the observer the edge image PE1 appears to overlap the edge portion E1 on the inside of the square pattern (in the +X direction) relative to the edge portion E1. Further, from the position of the left eye L, the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X5 to the position X6. Since the edge portion E2 of the square pattern is at the position X5, to the left eye L of the observer the edge image PE2 appears to overlap the edge portion E2 on the outside of the square pattern (in the +X direction) relative to the edge portion E2.
  • FIG. 35 is a graph showing an example of the brightness of the optical image IM at the viewpoint position VP of the present embodiment.
  • FIG. 35 (L) is a graph showing an example of the brightness of the optical image IML at the position of the left eye L at the viewpoint position VP.
  • an optical image IML having a brightness obtained by combining the brightness of the image P11 and the brightness of the image (contour image) P12L seen from the position of the left eye L is generated.
  • the brightness inside the square pattern as viewed from the viewpoint position VP is the brightness BR2, and the brightness outside the square pattern as viewed from the viewpoint position VP is 0 (zero).
  • the position of the edge portion E1 of the square pattern at the position Z1 is the position X2, and the position of the edge portion E2 is the position X5.
  • therefore, the brightness of the square pattern as viewed from the viewpoint position VP is the brightness BR2 from the position X2 to the position X5, and 0 (zero) at positions in the (−X) direction from the position X2 and in the (+X) direction from the position X5.
  • the brightness viewed from the viewpoint position VP of the edge image PE1 and the edge image PE2 of the contour image P12 is brightness BR1.
  • the edge image PE1 of the contour image P12L seen from the position of the left eye L is displayed so as to overlap the edge portion E1 of the square pattern in the range of the position X2 to the position X3.
  • the edge image PE2 of the contour image P12L is displayed so as to overlap the edge portion E2 of the square pattern in the range of the position X5 to the position X6.
  • the brightness viewed from the viewpoint position VP of the contour image P12L is brightness BR1 at positions X2 to X3 and positions X5 to X6, and brightness 0 (zero) at other positions in the X direction.
  • the brightness of the optical image IML is the brightness BR3 from the position X2 to the position X3, the brightness BR2 from the position X3 to the position X5, the brightness BR1 from the position X5 to the position X6, and 0 (zero) at positions in the (−X) direction from the position X2 and in the (+X) direction from the position X6.
  • FIG. 36 is a graph showing an example of a contour portion of the image P11 that is perceived by the observer in the present embodiment based on the optical image IML.
  • FIG. 36 (L) is a graph showing an example of a contour portion of the image P11 perceived by the observer based on the optical image IML at the position of the left eye L of the observer.
  • the outline portion of the image P11 is a portion in which the change in brightness is larger than the change in brightness in the surrounding portions in the portion of the optical image that shows the image P11.
  • the distribution of brightness of the image recognized by the observer by the optical image IML formed on the retina of the left eye L of the observer at the viewpoint position VP is as shown by a waveform WL in FIG. .
  • the observer perceives the position on the X axis where the change in brightness of the recognized optical image IML is largest (that is, where the gradient of the waveform WL is largest) as the contour portion of the image P11 being observed. Specifically, the observer observing the optical image IML perceives the position XEL shown in FIG. 36(L) (that is, the position at the distance LEL from the origin O on the X axis) to be the contour of the square pattern.
  • the optical image IML seen by the left eye L of the observer and the position of the contour portion given by the optical image IML have been described above. Next, the optical image IMR seen by the observer's right eye R and the position of the contour portion given by the optical image IMR will be described.
  • when the image P11 and the contour image P12 are viewed from the position of the right eye R, the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X4 to the position X5. This differs from the left eye L, for which the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X5 to the position X6. Further, as described above, since the edge portion E2 of the square pattern is located at the position X5, to the right eye R of the observer the edge image PE2 appears to overlap the edge portion E2 on the inside of the square pattern (in the −X direction) relative to the edge portion E2. This differs from the left eye L of the observer, for which the edge image PE2 appears to overlap the edge portion E2 on the outside of the square pattern (in the +X direction) relative to the edge portion E2.
  • FIG. 35 (R) is a graph showing an example of the brightness of the optical image IMR at the position of the right eye R at the viewpoint position VP.
  • an optical image IMR having a brightness obtained by combining the brightness of the image P11 and the brightness of the image (contour image) P12R seen from the position of the right eye R is generated.
  • the brightness viewed from the viewpoint position VP of the square pattern as the image P11 is the same as the brightness at the position of the left eye L.
  • a specific example of brightness viewed from the viewpoint position VP of the contour image P12R will be described.
  • the brightness viewed from the viewpoint position VP of the edge image PE1 and the edge image PE2 of the contour image P12R is the brightness BR1.
  • the edge image PE1 of the contour image P12R that can be seen from the position of the right eye R is displayed so as to overlap the edge portion E1 of the square pattern in the range from the position X1 to the position X2.
  • edge image PE2 of the contour image P12R is displayed so as to overlap with the edge portion E2 of the square pattern in the range of the position X4 to the position X5. This is different from the fact that the edge image PE2 of the contour image P12L is displayed so as to overlap the edge portion E2 of the square pattern in the range of the position X5 to the position X6.
  • the brightness viewed from the viewpoint position VP of the contour image P12R is brightness BR1 at positions X1 to X2 and position X4 to position X5, and brightness 0 (zero) at other positions in the X direction.
  • the brightness of the optical image IMR is the brightness BR1 from the position X1 to the position X2, the brightness BR2 from the position X2 to the position X4, the brightness BR3 from the position X4 to the position X5, and 0 (zero) at positions in the (−X) direction from the position X1 and in the (+X) direction from the position X5. This differs from the brightness of the optical image IML, which is the brightness BR3 from the position X2 to the position X3, the brightness BR2 from the position X3 to the position X5, and the brightness BR1 from the position X5 to the position X6.
  • FIG. 36 (R) is a graph showing an example of the contour portion of the image P11 perceived by the observer based on the optical image IMR at the position of the right eye R of the observer.
  • the distribution of brightness of the image recognized by the observer by the optical image IMR formed on the retina of the right eye R of the observer at the viewpoint position VP is as shown by a waveform WR in FIG. .
  • the observer perceives the position on the X axis where the change in brightness of the recognized optical image IMR is largest (that is, where the gradient of the waveform WR is largest) as the contour portion of the image P11 being observed. Specifically, the observer observing the optical image IMR perceives the position XER shown in FIG. 36(R) (that is, the position at the distance LER from the origin O on the X axis) to be the contour of the square pattern. This differs from the case of the optical image IML, for which the observer perceives the position at the distance LEL from the origin O on the X axis as the contour of the square pattern.
  • the observer thus perceives the position XEL of the contour portion observed by the left eye L and the position XER of the contour portion observed by the right eye R as binocular parallax, and perceives the square pattern as a stereoscopic image (three-dimensional image) based on the binocular parallax of the contour portion.
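  • The perceived contour position (the maximum-gradient point of the retinal brightness waveform) can be illustrated with the sketch below; the Gaussian blur standing in for the eye's low-pass behavior and all brightness values are assumptions, and the sketch returns only the strongest edge in the profile:

```python
import numpy as np

def perceived_edge_position(profile, blur_sigma=3.0):
    """Return the sample index where the change (gradient) of the blurred
    brightness profile is largest -- the position the observer perceives
    as the contour (FIG. 36). The Gaussian stands in for the eye's
    low-pass behavior; sigma is an assumption."""
    n = int(6 * blur_sigma) | 1                      # odd kernel length
    xs = np.arange(n) - n // 2
    kernel = np.exp(-xs**2 / (2 * blur_sigma**2))
    kernel /= kernel.sum()
    smooth = np.convolve(profile, kernel, mode="same")
    return int(np.argmax(np.abs(np.gradient(smooth))))

# Left-eye profile of FIG. 35(L): 0, BR3 (X2..X3), BR2 (X3..X5), BR1 (X5..X6), 0.
BR1, BR2, BR3 = 60.0, 100.0, 160.0
left = np.concatenate([np.zeros(50), np.full(10, BR3),
                       np.full(40, BR2), np.full(10, BR1), np.zeros(50)])
x_el = perceived_edge_position(left)   # compare against the right-eye profile
```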
  • the contour correction unit 2013 corrects the contour image P12 based on the position Pt2 on the second display surface 2012S corresponding to the position Pt1 of the ridge line of the cube displayed on the first display surface 2011S as viewed from the viewpoint position VP, and on the center position Pt3 of the pixel Px22 of the contour image P12. That is, the contour correcting unit 2013 corrects the contour image P12 based on the position Pt3 of the pixel Px22 (contour pixel) that displays the contour image P12 among the pixels of the second display unit 2012, and on the contour position Pt2 on the second display unit 2012, which is determined based on the position Pt1 of the corresponding contour on the first display unit 2011 and the predetermined viewpoint position VP.
  • the position Pt2 is the second display surface 2012S corresponding to the contour displayed on the first display surface 2011S when the first display surface 2011S and the second display surface 2012S are viewed from the viewpoint position VP. It is the upper position.
  • if the contour image displayed on the second display surface 2012S were displayed centered on the position Pt2, the image P11 and the contour image P12 would be precisely aligned. Therefore, as shown in the figure, when the position Pt2 deviates from the center position Pt3 of the pixel, the contour correcting unit 2013 corrects the pixel values of the pixels Px11 to Px33.
  • the contour correcting unit 2013 corrects the pixel values of the pixels Px11 to Px33 so that the position Pt2 becomes the center of gravity when the pixel values of the pixels around the pixel Px22 (the pixels Px11 to Px33) are weighted and averaged based on their distances on the XY plane.
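  • A bilinear sketch of this weighted-average (center-of-gravity) correction is given below; it splits the contour pixel's value between Px22 and its neighbors so that the brightness-weighted centroid lands on Pt2 (the names and the unit pixel pitch are illustrative):

```python
def redistribute_contour_value(value, dx, dy, pitch=1.0):
    """Split a contour pixel's value between Px22 and its neighbors so
    that the brightness-weighted center of gravity lands on Pt2, which is
    offset by (dx, dy) from the pixel center Pt3. A bilinear sketch of
    the weighted-average idea above; assumes |dx|, |dy| <= pitch / 2."""
    fx, fy = abs(dx) / pitch, abs(dy) / pitch
    return {
        "Px22": value * (1 - fx) * (1 - fy),   # stays on the center pixel
        "x-neighbor": value * fx * (1 - fy),   # Px21 or Px23, by sign of dx
        "y-neighbor": value * (1 - fx) * fy,   # Px12 or Px32, by sign of dy
        "diag-neighbor": value * fx * fy,      # one of Px11/Px13/Px31/Px33
    }

# Example following FIG. 37A: Pt2 shifted by dy = +0.3 pixel, dx = 0, so
# 70% of the value stays on Px22 and 30% moves to Px12, putting the
# weighted centroid exactly on Pt2.
print(redistribute_contour_value(255, dx=0.0, dy=0.3))
```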
  • as shown in FIG. 37A, when the position Pt2 is shifted in the (+Y) direction with respect to the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px12 adjacent to the pixel Px22 in the (+Y) direction. As shown in FIG. 37B, when the position Pt2 is shifted in the (−Y) direction with respect to the center position Pt3 of the pixel Px22, the contour correcting unit 2013 corrects the pixel value of the pixel Px32 adjacent to the pixel Px22 in the (−Y) direction. As shown in FIG. 37C, when the position Pt2 is shifted in the (−X) direction with respect to the center position Pt3 of the pixel Px22, the contour correcting unit 2013 corrects the pixel value of the pixel Px21 adjacent to the pixel Px22 in the (−X) direction. In this case, the pixel Px21 is a pixel that displays the contour portion indicated by the contour image P12 even before correction; the contour correcting unit 2013 therefore corrects the pixel value of the pixel Px21 to a pixel value obtained by adding the corrected pixel value of the pixel Px22 to the pre-correction pixel value of the contour image P12 at the pixel Px21. Likewise, when the position Pt2 is shifted in the (+X) direction, the contour correction unit 2013 corrects the pixel value of the pixel Px23 adjacent to the pixel Px22 in the (+X) direction.
  • the contour correcting unit 2013 corrects the pixel values of pixels around (for example, adjacent to) the pixel Px22 (contour pixel) displaying the contour image P12 among the pixels of the second display unit 2012. Accordingly, the display device 2100 can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
  • although the contour correction unit 2013 has been described here as correcting the pixel values of the pixels surrounding (for example, adjacent to) the pixel Px22 (contour pixel) that displays the contour image P12, the configuration is not limited to this. The contour correction unit 2013 can also be configured to correct the pixel values of pixels in the vicinity of the pixel Px22 so that the user 1 viewing the first display surface 2011S and the second display surface 2012S from the viewpoint position VP perceives the position Pt2 as overlapping the position Pt1.
  • the neighboring pixels need not be adjacent to the pixel Px22; a neighboring pixel may be, for example, a pixel adjacent to a pixel that is itself adjacent to the pixel Px22.
  • although the configuration in which the contour correcting unit 2013 corrects the pixel value of the pixel lying in the direction in which the position Pt2 deviates from the center position Pt3 of the pixel Px22 has been described, the present invention is not limited to this. The contour correcting unit 2013 may instead correct the pixel value of the pixel in the direction opposite to the direction in which the position Pt2 is shifted from the center position Pt3 of the pixel Px22; for example, in the situation of FIG. 37A, the contour correction unit 2013 may correct the pixel value of the pixel Px32, which lies opposite to the direction of the shift.
  • the contour correcting unit 2013 may also correct the pixel value of a pixel lying in an oblique direction with respect to the direction in which the position Pt2 deviates from the center position Pt3 of the pixel Px22. Specifically, in FIG. 37A, the contour correction unit 2013 may correct the pixel value of the pixel Px11, which is oblique to the direction of the shift of the position Pt2 from the center position Pt3 of the pixel Px22; any of the pixels Px13, Px31, and Px33, which are likewise oblique to the direction of the shift, may be selected as well. Furthermore, the contour correction unit 2013 may correct the pixel values of a plurality of pixels by combining any or all of the pixel in the direction of the shift from the position Pt3, the pixel in the opposite direction, and the pixels in the oblique directions.
  • although the contour correction unit 2013 has been described here as correcting the pixel value of one pixel, the present invention is not limited to this. The contour correction unit 2013 can also be configured to correct the pixel values of a plurality of pixels in the vicinity of the pixel Px22 so that the user 1 viewing the first display surface 2011S and the second display surface 2012S from the viewpoint position VP perceives the position Pt2 as overlapping the position Pt1.
  • FIG. 38 is a flowchart showing an example of the operation of the display device 2100 of the present embodiment.
  • the first display unit 2011 acquires the image information D11 from the image supply device 2002, and displays the image P11 based on the image information D11 on the first display surface 2011S (step S2010).
  • the contour correction unit 2013 acquires the image information D12 from the image supply device 2002 (step S2020).
  • the contour correcting unit 2013 generates corrected image information D12C obtained by correcting the acquired image information D12 based on the position Pt2 and the position Pt3.
  • specifically, the contour correcting unit 2013 compares the position Pt2, which is the contour position, with the position Pt3, which is the center position of the pixel, and determines the direction of the shift of the contour position with respect to the center position of the pixel (step S2030). If the contour correction unit 2013 determines that the position Pt2 is shifted in the (+Y) direction with respect to the position Pt3 (step S2030-UP), it corrects the pixel value of the pixel Px12; if it determines that the position Pt2 is shifted in the (−Y) direction, it corrects the pixel value of the pixel Px32. Similarly, if the contour correcting unit 2013 determines that the position Pt2 is shifted in the (−X) direction with respect to the position Pt3 (step S2030-LEFT) or in the (+X) direction (step S2030-RIGHT), it corrects the pixel value of the pixel Px21 or the pixel Px23, respectively.
  • the second display unit 2012 acquires the corrected image information D12C from the contour correcting unit 2013, and displays the contour image P12 based on the corrected image information D12C on the second display surface 2012S (step S2040).
  • the display device 2100 repeats these steps S2010 to S2040 to display the image P11 and the contour image P12 while correcting the contour image P12.
  • the display device 2100 of this embodiment includes the first display unit 2011, the second display unit 2012, and the contour correction unit 2013.
  • the first display unit 2011 displays an image to be displayed at the first depth position.
  • the second display unit 2012 has a plurality of pixels arranged two-dimensionally, and displays a contour image indicating a contour portion to be displayed at a second depth position different from the first depth position.
• The contour correcting unit 2013 corrects the contour image based on the position of the contour pixel that displays the contour image among the pixels of the second display unit 2012, and on the contour position on the second display unit 2012 that is determined based on the position, on the first display unit 2011, of the contour portion corresponding to the contour pixel and on a predetermined viewpoint position VP.
• The first display surface 2011S and the second display surface 2012S are, for example, screens of a liquid crystal display or a liquid crystal projector, and have two-dimensionally arranged pixels. Since each pixel has an area, there may be a deviation between the position Pt2, which corresponds to the position Pt1 of the ridge line of the cube displayed on the first display surface 2011S as viewed from the viewpoint position VP, and the center position Pt3 of the pixel.
• When the position Pt2 is shifted from the center position Pt3 of the pixel, the accuracy of alignment between the contour image P12 displayed on the second display surface 2012S and the image P11 displayed on the first display surface 2011S is lowered. In this case, the user 1 cannot perceive an accurate stereoscopic image.
• Therefore, the contour correcting unit 2013 of the present embodiment corrects the position of the contour (for example, a ridge line of the cube) indicated by the contour image P12 based on the shift between the position Pt2 and the center position Pt3 of the pixel.
• Thereby, the display device 2100 suppresses the decrease in the accuracy of alignment between the contour image P12 displayed on the second display surface 2012S and the image P11 displayed on the first display surface 2011S. In this way, the display device 2100 can cause the user 1 to perceive a highly accurate stereoscopic image.
• The contour correcting unit 2013 may correct the pixel value of a pixel in the vicinity of the pixel Px22 (contour pixel) with a correction amount based on the distance between the center position Pt3 of the pixel Px22 (contour pixel) and the position Pt2 (contour position). Specifically, when the position Pt2 and the position Pt3 are separated by a distance ΔPt, the contour correcting unit 2013 sets the pixel value of the pixel to be corrected with a correction amount according to the distance ΔPt. For example, the contour correcting unit 2013 increases the correction amount as the position Pt2 becomes farther from the center position Pt3 of the pixel. Thereby, the contour correcting unit 2013 can correct the contour image P12 with higher accuracy.
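• A minimal sketch of such a distance-dependent correction amount, assuming a simple linear mapping (the disclosure only states that the amount grows with the distance ΔPt; the linear form and the half-pitch normalization are assumptions):

    def correction_amount(delta_pt, pitch, max_amount=1.0):
        """Correction amount that grows with the distance between the
        contour position Pt2 and the pixel-centre position Pt3.
        Returns 0 when Pt2 coincides with Pt3 and saturates at max_amount
        once the deviation reaches half a pixel pitch."""
        return max_amount * min(delta_pt / (pitch / 2.0), 1.0)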
• The contour correcting unit 2013 may also be configured to correct the contour image P12 based on a first contour position Pt2L determined based on the position of the left eye L of the user 1, a second contour position Pt2R determined based on the position of the right eye R of the user 1, and the position Pt3 of the pixel Px22 (contour pixel).
  • FIG. 39 is a schematic diagram illustrating an example of a positional relationship between the user 1 and the display device 2100 according to a modification of the present embodiment.
  • the first display surface 2011S displays the cubic image P11.
• The second display surface 2012S displays the contour image P12 of the cube.
• From the viewpoint position VP at the depth position ZVP, the user 1 sees the first display surface 2011S at the depth position Z1 and the second display surface 2012S at the depth position Z2 overlapping each other.
  • the position Pt1 of the cube ridgeline displayed on the first display surface 2011S and the position Pt2 on the second display surface 2012S appear to overlap.
• The position on the second display surface 2012S that appears to overlap the position Pt1 of the cube ridge line displayed on the first display surface 2011S differs between the left eye L and the right eye R of the user 1. To the left eye L of the user 1, the position Pt1 of the cube ridge line displayed on the first display surface 2011S and the position Pt2L on the second display surface 2012S appear to overlap. To the right eye R of the user 1, the position Pt1 and the position Pt2R on the second display surface 2012S appear to overlap.
• In this case, the contour correcting unit 2013 corrects the contour image P12 using, as the contour position that serves as the reference for the pixel-value correction described above, the midpoint on the XY plane between the position Pt2L and the position Pt2R on the second display surface 2012S.
  • the contour correcting unit 2013 can correct the contour image P12 based on the positions of the left eye L and the right eye R of the user 1, so that the correction accuracy can be improved. Therefore, the display device 2100 can cause the user 1 to perceive a more accurate stereoscopic image.
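• A sketch of this binocular variant, using the XY midpoint of the two eye-dependent contour positions as the reference contour position (a direct transcription of the rule above; the tuple representation of positions is an illustrative assumption):

    def binocular_contour_position(pt2_left, pt2_right):
        """Midpoint on the XY plane between the contour position Pt2L seen
        from the left eye L and Pt2R seen from the right eye R."""
        return ((pt2_left[0] + pt2_right[0]) / 2.0,
                (pt2_left[1] + pt2_right[1]) / 2.0)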
  • a display device 2100a according to a sixth embodiment of the present invention will be described with reference to FIG.
  • the display device 2100a of the present embodiment is different from the above-described embodiment in that the display device 2100a includes a detection unit 2014 that detects the viewpoint position VP.
• Components identical to those of the above-described embodiment are denoted by the same reference symbols, and their description is omitted.
  • FIG. 40 is a schematic diagram showing an example of the configuration of the display device 2100a according to the sixth embodiment of the present invention.
  • the display device 2100a includes a contour correction unit 2013a and a detection unit 2014.
• The detection unit 2014 includes a distance-measuring sensor, detects the position of the user 1, regards the detected position as the viewpoint position VP, and outputs information indicating the viewpoint position VP to the contour correcting unit 2013a.
• The contour correcting unit 2013a acquires the information indicating the viewpoint position VP detected by the detection unit 2014, and calculates the position Pt2 (contour position) on the second display unit 2012 based on the acquired information. Then, the contour correcting unit 2013a corrects the contour image P12 based on the calculated position Pt2 and the position Pt3 of the pixel Px22 (contour pixel) that displays the contour image P12. In other words, the contour correcting unit 2013a corrects the contour image P12 based on the position Pt3 of the contour pixel and on the position Pt2 (contour position) determined based on the position Pt1, on the first display unit 2011, of the contour corresponding to the contour pixel and on the detected viewpoint position VP.
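• Geometrically, the contour position Pt2 for a detected viewpoint is the intersection of the line of sight with the plane of the second display surface. The following sketch assumes a simple pinhole geometry with the viewpoint at depth ZVP and the display surfaces at depths Z1 and Z2; the coordinate conventions are assumptions for illustration, not details from this disclosure.

    def contour_position_on_second_display(vp, pt1, z_vp, z1, z2):
        """Project the contour position Pt1 (on the first display surface at
        depth z1) through the viewpoint VP (at depth z_vp) onto the second
        display surface at depth z2, by similar triangles."""
        t = (z2 - z_vp) / (z1 - z_vp)   # fraction of the way from VP to Pt1
        return (vp[0] + t * (pt1[0] - vp[0]),
                vp[1] + t * (pt1[1] - vp[1]))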
• Thereby, the display device 2100a detects the position of the user 1 as the viewpoint position VP, so that even if the user 1 moves, the contour image P12 can be corrected according to the position to which the user 1 has moved. Therefore, even if the position of the user 1 changes, the display device 2100a can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
  • the detection unit 2014 may detect the center position of the face of the user 1 as the viewpoint position VP.
  • the detection unit 2014 includes a video camera (not shown) and an image analysis unit.
• The image analysis unit analyzes the image captured by the video camera, extracts the face of the user 1, and detects the center position of the extracted face as the viewpoint position VP.
  • the center position of the face of the user 1 includes the position of the center of gravity based on the outline of the face of the user 1 and the position of the midpoint between the position of the left eye L and the position of the right eye R of the user 1.
• Thereby, the display device 2100a can set the viewpoint position VP according to the orientation of the face. Therefore, the display device 2100a can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S even when not only the position of the user 1 but also the orientation of the face changes.
  • the contour correcting unit 2013a may correct the contour image P12 with a correction amount based on the distance to the user 1 detected by the detecting unit 2014.
• When the user 1 is at a position close to the display device 2100a, detailed portions of the image are easily perceived by the user 1, so that a positional shift between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S is also easily perceived by the user 1.
• When the user 1 is at a position far from the display device 2100a, detailed portions of the image are less likely to be perceived by the user 1, so that a positional shift between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S is also less likely to be perceived by the user 1.
• Therefore, when the user 1 is at a position far from the display device 2100a, the contour correcting unit 2013a corrects the contour image P12 with a reduced correction amount.
• The contour correcting unit 2013a may omit the correction when the distance to the user 1 detected by the detection unit 2014 is even larger (for example, greater than a predetermined threshold value).
• Conversely, when the user 1 is at a position close to the display device 2100a, the contour correcting unit 2013a corrects the contour image P12 with an increased correction amount.
• As described above, the contour correcting unit 2013a corrects the contour image P12 according to the distance to the user 1 detected by the detection unit 2014, so that even if the position of the user 1 changes, the positional deviation between the image P11 and the contour image P12 can be made less likely to be perceived by the user 1. Therefore, even if the position of the user 1 changes, the display device 2100a can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
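• The distance-dependent behaviour above can be summarized as a gain applied to the correction amount. The sketch below is a hedged illustration: the disclosure states only the qualitative behaviour, so the numeric thresholds and the linear falloff are assumptions.

    def distance_gain(viewer_distance, near=0.5, far=3.0):
        """Gain on the correction amount: full correction close by,
        no correction beyond the predetermined threshold `far`.
        `near` and `far` (in metres) are illustrative values."""
        if viewer_distance >= far:
            return 0.0                                  # far away: skip the correction
        if viewer_distance <= near:
            return 1.0                                  # close by: full correction
        return (far - viewer_distance) / (far - near)   # linear falloff in between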
• A display device 2100b according to a seventh embodiment of the present invention will be described with reference to FIG. 41. The display device 2100b of the present embodiment differs from the above-described embodiments in that it includes a first display unit 2011b, a second display unit 2012b, and a third display unit 2015.
• Components identical to those of the above-described embodiment are denoted by the same reference symbols, and their description is omitted.
  • FIG. 41 is a schematic diagram showing an example of the configuration of a display device 2100b according to the seventh embodiment of the present invention.
  • the display device 2100b includes a first display unit 2011b, a second display unit 2012b, and a third display unit 2015.
• The first display unit 2011b includes a first display surface 2011Sb that displays an image at the depth position Z1.
• The second display unit 2012b includes a second display surface 2012Sb that displays an image at the depth position Z2.
• The first display unit 2011 and the second display unit 2012 may each be either a monochrome (black-and-white) display unit or a color display unit.
  • the first display unit 2011b and the second display unit 2012b are monochrome display units. That is, the first display unit 2011b displays an image P11b that is a monochrome image on the first display surface 2011Sb.
  • the second display unit 2012b displays an image P12b that is a monochrome image on the second display surface 2012Sb.
• A monochrome image is an image that has only brightness pixel values (for example, luminance) and no chromaticity or saturation pixel values.
• Such a monochrome image includes a black-and-white binary image or a grayscale image.
• The image P11b includes an image of the display target OBJ. That is, the first display unit 2011b displays the image P11b including the image of the display target OBJ on the first display surface 2011Sb. Further, the image P12b includes an edge image PE' that shows the contour portion of the display target OBJ.
• The edge image PE' is a monochrome image showing the outline of the display target OBJ.
• The second display unit 2012b displays the edge image PE' indicating the outline of the display target OBJ on the second display surface 2012Sb.
  • the first display unit 2011b and the second display unit 2012b display these images to generate a stereoscopic image of the display target OBJ.
  • the stereoscopic image of the display object OBJ is a monochrome stereoscopic image.
  • the third display unit 2015 is a color display unit that displays an image P15 that is a color image.
  • the image P15 is an image corresponding to the image P11b and the image P12b.
• The third display unit 2015 gives color to the images P11b and P12b, which are monochrome images, by displaying the image P15. That is, when the user 1 views these images superimposed from the viewpoint position VP, the monochrome images P11b and P12b appear to be a color image.
  • the display device 2100b can improve the accuracy of the stereoscopic image perceived by the user 1 by displaying the image P11b and the image P12b, both of which are monochrome images.
• In general, a color image can convey a larger amount of information than a monochrome image.
• By displaying the image P15, that is, the color image, superimposed on the stereoscopic image generated by the images P11b and P12b, the accuracy of the stereoscopic image perceived by the user 1 can be improved and the amount of information in the stereoscopic image can be increased.
  • the display device 2100b can increase the information amount of the stereoscopic image while improving the accuracy of the stereoscopic image, and can cause the user 1 to perceive the stereoscopic image.
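• One simple way to derive such a monochrome pair plus a color layer from a single color source image is to separate luminance from the full-color data. The sketch below is an assumption for illustration; the disclosure does not specify how P11b, P12b, and P15 are produced, and the Rec. 601 luma weights are an arbitrary choice.

    import numpy as np

    def split_color_image(rgb):
        """Split a colour image into a luminance image (from which the
        monochrome images P11b and the edge image P12b would be derived)
        and the colour image P15 for the third display unit."""
        rgb = np.asarray(rgb, dtype=float)
        luminance = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        return luminance, rgb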
• Each unit included in the display system 100 in the above embodiments, and each unit included in each display device in the above embodiments (the display devices 2100, 2100a, and 2100b, generically referred to as the display device), may be realized by dedicated hardware, or may be realized by a memory and a microprocessor.
• Each unit included in the display system 100 and each unit included in the display device may be configured by a memory and a CPU (central processing unit), and their functions may be realized by loading a program for realizing the function of each unit into the memory and executing it.
• A program for realizing the function of each unit included in the display system 100 and of each unit included in the display device may be recorded on a computer-readable recording medium, and the processing by each of the above-described units may be performed by causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
• The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
• The "computer-readable recording medium" also includes a medium that dynamically holds a program for a short time, such as a communication line used when a program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds a program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case.
• The program may be a program for realizing a part of the functions described above, or may be a program that realizes the functions described above in combination with a program already recorded in the computer system.

Abstract

This display device is equipped with: a first screen for displaying a first image that includes an object to be displayed and is based on first image data; a second screen for displaying a second image that includes the object to be displayed and is based on second image data; a detection unit for detecting the position of an observer who observes the first screen and the second screen; and a control unit that corrects an image data portion of the second image data that pertains to the vicinity of the contour of the object to be displayed on the basis of the observer's position detected by the detection unit and causes the result to be displayed on the second screen.

Description

Image processing apparatus, display apparatus, and program
The present invention relates to an image processing device, a display device, and a program.
This application claims priority based on Japanese Patent Application No. 2013-17968 filed on January 31, 2013, and Japanese Patent Application No. 2013-17969 filed on January 31, 2013, the contents of which are incorporated herein by reference.
Some image processing apparatuses generate image information to be displayed on a display system capable of three-dimensional display. Image information displayed on such a display system is in many cases created exclusively as image information for three-dimensional display. On the other hand, there is a technique for performing three-dimensional display based on image information for two-dimensional display (see, for example, Patent Document 1).
In recent years, there is also known a display method in which a plurality of images, given a ratio of pixel values (for example, brightness, luminance, hue, or saturation) corresponding to the depth position of a stereoscopic image (three-dimensional image), are displayed on a plurality of display surfaces at different depth positions, thereby causing a user who views these images superimposed to perceive a stereoscopic effect (see, for example, Patent Document 2).
Patent Document 1: JP 2003-057596 A
Patent Document 2: Japanese Patent No. 3464633
However, according to the technique described in Patent Document 1, since the images for two-dimensional display are displayed superimposed with their luminance adjusted, the display target indicated by the two images is displayed in a shifted, overlapping state, and there is a problem that visibility is reduced.
Further, in the display method described above, the positions of the plurality of images viewed from the user may shift depending on the definition of the pixels. In this case, there is a problem that such a display method cannot improve the accuracy of the stereoscopic image perceived by the user.
An object of an aspect of the present invention is to provide an image processing device and a program that improve the visibility of a displayed stereoscopic image.
Another object is to provide a display device that can improve the accuracy of a stereoscopic image perceived by a user.
A display device according to an aspect of the present invention includes: a first display surface that displays a first image based on first image data including a display target; a second display surface that displays a second image based on second image data including the display target; a detection unit that detects the position of an observer observing the first display surface and the second display surface; and a control unit that corrects image data in the vicinity of the contour portion of the display target in the second image data based on the position of the observer detected by the detection unit, and displays the corrected data on the second display surface.
A program according to an aspect of the present invention causes a computer of a display device, which has a first display surface that displays a first image based on first image data including a display target, a second display surface that displays a second image based on second image data including the display target, and a detection unit that detects the position of an observer observing the first display surface and the second display surface, to execute a control step of correcting image data in the vicinity of the contour portion of the display target in the second image data based on the position of the observer detected by the detection unit and displaying the corrected data on the second display surface.
A display device according to an aspect of the present invention includes: a first display surface that displays a first image based on first image data including an object; a detection unit that detects the relative position between the first display surface and an observer observing the first display surface; and a control unit that corrects image data in the vicinity of the contour of the object in the first image data based on the relative position detected by the detection unit, and displays the corrected data on the first display surface.
A program according to an aspect of the present invention causes a computer of a display device, which has a first display surface that displays a first image based on first image data including an object and a detection unit that detects the relative position between the first display surface and an observer observing the first display surface, to execute a control step of correcting image data in the vicinity of the contour of the object in the first image data based on the relative position detected by the detection unit and displaying the corrected data on the first display surface.
A display device according to an aspect of the present invention includes: a first display unit that displays an image of a display target at a first depth position; a second display unit that has a plurality of two-dimensionally arranged pixels and displays, at a second depth position different from the first depth position, a contour image indicating a contour portion of the display target; and a contour correcting unit that corrects the contour image based on the position of a contour pixel that displays the contour image among the pixels of the second display unit, and on a contour position on the second display unit determined based on the position, on the first display unit, of the contour portion corresponding to the contour pixel and on a predetermined viewpoint position.
A display device according to an aspect of the present invention includes a contour correcting unit that, for a contour image indicating a contour portion of a display target displayed by a second display unit at a second depth position different from a first depth position at which a first display unit displays an image of the display target, corrects the contour image based on the position of a contour pixel that displays the contour image among a plurality of two-dimensionally arranged pixels of the second display unit, and on a contour position on the second display unit determined based on the position, on the first display unit, of the contour portion corresponding to the contour pixel and on a predetermined viewpoint position.
A program according to an aspect of the present invention causes a computer to execute a contour correction step of correcting, for a contour image indicating a contour portion of a display target displayed by a second display unit at a second depth position different from a first depth position at which a first display unit displays an image of the display target, the contour image based on the position of a contour pixel that displays the contour image among a plurality of two-dimensionally arranged pixels of the second display unit, and on a contour position on the second display unit determined based on the position, on the first display unit, of the contour portion corresponding to the contour pixel and on a predetermined viewpoint position.
According to the aspects of the present invention, it is possible to provide an image processing device and a program that improve the visibility of a displayed stereoscopic image.
Further, according to the aspects of the present invention, the accuracy of the stereoscopic image perceived by the user can be improved.
A diagram showing an overview of the display system according to the first embodiment of the present invention.
A configuration diagram showing an example of the configuration of a display system including the display device according to the embodiment.
A schematic diagram showing an example of an image in the embodiment.
A schematic diagram showing an example of an image in the embodiment.
A schematic diagram showing an example of an image displayed by the display device in the embodiment.
A schematic diagram showing an example of an optical image in the embodiment.
A graph showing an example of the brightness distribution of an optical image in the embodiment.
A graph showing an example of binocular parallax occurring in the left eye and the right eye in the embodiment.
A diagram explaining the influence when the edge positions of two images displayed on the display device are shifted.
A schematic block diagram showing the configuration of the display system according to the embodiment.
A diagram showing the positional relationship between the observer, the display device, and the contour portion in the target image.
A diagram explaining an example of contour image correction in the display system of the embodiment.
A diagram explaining a modification of the contour image correction method in the display system of the embodiment.
A diagram showing adjustment of the brightness (luminance) of the edge image as a modification of the contour image correction method in the display system of the embodiment.
A diagram showing adjustment of the brightness (luminance) of the edge image as a modification of the contour image correction method in the display system of the embodiment.
A diagram showing an overview of the display system in the second embodiment.
A diagram showing an overview of the display system in the second embodiment.
A diagram showing an overview of the display system in the second embodiment.
A schematic block diagram showing the configuration of the display system according to the embodiment.
A diagram showing an overview of the display system in the third embodiment.
A diagram showing an overview of the display system in the third embodiment.
A diagram showing an overview of the display system in the third embodiment.
A schematic block diagram showing the configuration of the display system according to the embodiment.
A diagram explaining a correction method for the case where the observer moves out of the region, determined by the optical characteristics of the lenticular lens, in which the stereoscopic image can be observed.
A diagram explaining a correction method for the case where the observer moves.
A diagram explaining a modification of the display device in the display system.
A diagram showing a method of correcting images when observed from three directions.
A schematic block diagram showing the configuration of the display system according to the fourth embodiment.
A diagram explaining processing for displaying a display target drawn in perspective while rotating it in a pseudo manner.
A flowchart showing processing performed by the image processing device.
A diagram explaining an example of a stereoscopic image display method.
A schematic diagram showing an example of the outline of the configuration of the display device according to the fifth embodiment of the present invention.
A schematic diagram showing an example of the pixel configuration of the second display surface of the embodiment.
A schematic diagram showing an example of the pixels of the second display surface of the embodiment.
A schematic diagram showing an example of a specific configuration of the display device of the embodiment.
A schematic diagram showing an example of an image to be displayed in the embodiment.
A schematic diagram showing an example of a contour image of the embodiment.
A schematic diagram showing an example of the positional relationship between an image, a contour image, and a viewpoint position in the embodiment.
A schematic diagram showing an example of an optical image seen by the observer's eyes in the embodiment.
A graph showing an example of the brightness of the optical image at the viewpoint position in the embodiment.
A graph showing an example of the contour portion of an image that the observer perceives based on the optical image in the embodiment.
A schematic diagram showing an example of a contour image corrected by the contour correcting unit of the embodiment.
A schematic diagram showing an example of a contour image corrected by the contour correcting unit of the embodiment.
A schematic diagram showing an example of a contour image corrected by the contour correcting unit of the embodiment.
A schematic diagram showing an example of a contour image corrected by the contour correcting unit of the embodiment.
A flowchart showing an example of the operation of the display device of the embodiment.
A schematic diagram showing an example of the positional relationship between the user and the display device according to a modification of the embodiment.
A schematic diagram showing an example of the configuration of the display device according to the sixth embodiment of the present invention.
A schematic diagram showing an example of the configuration of the display device according to the seventh embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. First, an overview of stereoscopic image display will be described.
[First embodiment]
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a diagram showing an overview of a display system in the present embodiment. The display system 100 shown in this figure displays an image that allows stereoscopic viewing on the display unit.
Hereinafter, in the description of each drawing, an XYZ rectangular coordinate system is set, and the positional relationship of each part will be described with reference to this XYZ rectangular coordinate system. A direction in which the display device 10 displays an image is a positive direction of the Z axis, and orthogonal directions on a plane perpendicular to the Z axis direction are an X axis direction and a Y axis direction, respectively. Here, the X-axis direction is the horizontal direction of the display device 10, and the Y-axis direction is the vertical direction of the display device 10.
The observer 1 is at a position where the display surface 11S of the display unit 11 and the display surface 12S of the display unit 12 of the display device 10 are within the field of view. The display device 10 shown in FIG. 1 displays a stereoscopic image so that it can be stereoscopically viewed from a predetermined position (Ma) (viewing position) in the positive direction of the Z axis (the direction facing the display unit 12). The observer 1 can stereoscopically view the stereoscopic image displayed on the display unit 11 and the display unit 12 of the display device 10 from the viewing position.
Next, a display system that displays a stereoscopic image in the present embodiment will be described.
FIG. 2 is a configuration diagram illustrating an example of a configuration of the display system 100 including the display device 10 according to the present embodiment.
A display system 100 according to the present embodiment includes an image processing device 2 and a display device 10.
The image processing device 2 supplies the image information D11 and the image information D12 to the display device 10. Here, the image information D12 is information for displaying the image P12 displayed by the display device 10. The image information D11 is information for displaying the image P11 displayed by the display device 10, and is image information of the edge image PE generated based on the image information D12, for example. The edge image PE is an image showing an edge portion E in the image P12. The edge image PE will be described later with reference to FIGS. 3A and 3B.
The display device 10 includes a display unit 11 and a display unit 12. The display device 10 displays the image P11 based on the image information D11 acquired from the image processing device 2, and displays the image P12 based on the image information D12 acquired from the image processing device 2.
The display unit 11 and the display unit 12 of this embodiment are arranged in the order of the display unit 11 and the display unit 12 in the (+ Z) direction. That is, the display unit 11 and the display unit 12 are arranged at different depth positions. Here, the depth position is a position in the Z-axis direction.
The display unit 12 includes a display surface 12S that displays an image in the (+ Z) direction, and displays the image P12 on the display surface 12S based on the image information D12 acquired from the image processing device 2. A light ray (light) R12 emitted from the image P12 displayed on the display surface 12S is visually recognized by the observer 1 as an optical image.
The display unit 11 includes a display surface 11S that displays an image in the (+ Z) direction, and displays the image P11 on the display surface 11S based on the image information D11 acquired from the image processing device 2. A light ray (light) R11 emitted from the image P11 displayed on the display surface 11S is visually recognized by the observer 1 as an optical image.
The display unit 12 of the present embodiment is a transmissive display unit that can transmit the light beam R11 (light) corresponding to the image P11 displayed by the display unit 11. That is, the display surface 12S displays the image P12 and transmits the light beam R11 of the image P11 displayed by the display unit 11. In other words, the display device 10 displays the image P11 and the image P12 so that the observer 1 visually recognizes them overlapping each other. In this way, the display unit 11 displays the image P11, which indicates the edge portions in the image P12, at a depth position different from the depth position at which the image P12 is displayed.
Next, the image P12 and the image P11 of this embodiment will be described with reference to FIGS. 3A and 3B. Here, when an image is shown in the following drawings, for the sake of convenience, a portion where the brightness of the image is bright (for example, high brightness) is shown darkly.
FIGS. 3A and 3B are schematic diagrams illustrating examples of the image P12 and the image P11 in the present embodiment.
FIG. 3A shows an example of the image P12 in the present embodiment. FIG. 3B shows an example of the image P11 in the present embodiment.
The image P12 of the present embodiment is, for example, an image showing a square pattern as shown in FIG. 3A. In the square pattern shown by the image P12, each of the four sides constituting the square can be an edge portion; in the following description, for convenience, the edge portion indicating the left side of the square (left-side edge portion) E1 and the edge portion indicating the right side (right-side edge portion) E2 will be described as the edge portions E.
The image P11 of the present embodiment is, for example, as shown in FIG. 3B, an image including an edge image (left-side edge image) PE1 showing the left-side edge portion E1 of the square pattern and an edge image (right-side edge image) PE2 showing the right-side edge portion E2. Here, an edge portion (which may be expressed simply as an edge or an edge region) is, for example, a portion where the brightness (for example, luminance) of adjacent or neighboring pixels in an image changes abruptly. For example, the edge portion E denotes the theoretical line segment with no width on the left or right side of the square shown in FIG. 3A, and also denotes, for example, a region around that edge having a finite width corresponding to the resolution of the display unit 11.
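As an illustration of extracting such an edge image from image information, the following sketch marks the pixels where the brightness of neighboring pixels changes sharply. This is a non-authoritative sketch: the Sobel operator and the threshold are illustrative assumptions, and the disclosure does not prescribe a particular edge detector.

    import numpy as np
    from scipy import ndimage

    def edge_image(image):
        """Derive an edge image PE from a grayscale image: mark the pixels
        where the brightness changes sharply between neighbouring pixels."""
        img = np.asarray(image, dtype=float)
        gx = ndimage.sobel(img, axis=1)          # horizontal brightness gradient
        gy = ndimage.sobel(img, axis=0)          # vertical brightness gradient
        magnitude = np.hypot(gx, gy)
        return (magnitude > 0.25 * magnitude.max()) * 255.0   # illustrative threshold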
Next, a configuration in which the display device 10 according to the present embodiment displays the image P11 and the image P12 in association with each other will be described with reference to FIG. 4.
FIG. 4 is a schematic diagram illustrating an example of an image displayed by the display device 10 according to the present embodiment.
In the present embodiment, the display unit 11 displays the image P11 so that the observer 1 can visually recognize it. Similarly, the display unit 12 displays the image P12 so that the observer 1 can visually recognize it. The image P12 is displayed at a position that is a predetermined distance ZD away, in the Z-axis direction, from the position at which the image P11 is displayed. As described above, the display unit 12 of the present embodiment is a transmissive display unit that transmits light. For this reason, the image P11 displayed on the display unit 11 and the image P12 displayed on the display unit 12 are visually recognized by the observer 1 so as to overlap. Here, the predetermined distance ZD is the distance between the position where the image P11 is displayed and the position where the image P12 is displayed.
As shown in FIG. 4, the display device 10 displays the images P11 and P12 so that the left-side edge portion E1 in the image P12 displayed by the display unit 12 and the left-side edge image PE1 corresponding to that edge portion are visually recognized in correspondence with each other. Similarly, the display device 10 displays the images so that the right-side edge portion E2 in the image P12 displayed by the display unit 12 and the right-side edge image PE2 corresponding to that edge portion are visually recognized in correspondence with each other.
At this time, for example, the display device 10 displays the images so that, to the left eye L of the observer 1, the left-side edge portion E1 and the left-side edge image PE1 appear to overlap on the (−X) side (that is, outside the square) of the left-side edge portion E1 of the square indicated by the image P12.
Further, the display device 10 displays the images so that, to the left eye L of the observer 1, the right-side edge portion E2 and the right-side edge image PE2 appear to overlap on the (−X) side (that is, inside the square) of the right-side edge portion E2 of the square indicated by the image P12. Similarly, for example, the display device 10 displays the images so that, to the right eye R of the observer 1, the right-side edge portion E2 and the right-side edge image PE2 appear to overlap on the (+X) side (that is, outside the square) of the right-side edge portion E2. The display device 10 also displays the images so that, to the right eye R of the observer 1, the left-side edge portion E1 and the left-side edge image PE1 appear to overlap on the (+X) side (that is, inside the square) of the left-side edge portion E1.
Next, a mechanism in which the observer 1 recognizes a stereoscopic image (three-dimensional image) from the images P11 and P12 will be described.
First, when the observer 1 observes the image P12 and the edge image PE corresponding to the edge portion E of the image P12 at overlapping positions, the observer perceives an image at a depth position between the display surfaces (for example, a position in the Z-axis direction) according to the luminance ratio between the image P12 and the edge image PE. For example, when a square pattern is observed, a luminance step so small that it cannot be recognized in the retinal image of the observer 1 is formed. In such a case, the observer perceives a virtual edge between the steps of brightness (for example, luminance) and recognizes them as one object. At this time, the virtual edges for the left eye L and the right eye R are slightly shifted from each other, and this shift is perceived as binocular parallax, changing the perceived depth position.
This mechanism will be described in detail with reference to FIGS. 5 to 7.
FIG. 5 is a schematic diagram illustrating an example of the optical image IM in the present embodiment. Here, the optical image IM is an image in which the image P11 and the image P12 are visually recognized by the observer 1. The optical image IM includes an optical image IML visually recognized by the left eye L of the observer 1 and an optical image IMR visually recognized by the right eye R.
First, the optical image IML visually recognized by the left eye L of the observer 1 will be described.
As shown in FIG. 5, in the left eye L of the observer 1, an optical image IML formed by combining the image P11L visually recognized by the left eye L and the image P12L visually recognized by the left eye L is formed.
For example, as described with reference to FIG. 4, in the left eye L, an optical image IML is formed in which the image showing the left-side edge portion E1 and the left-side edge image PE1 are combined on the (−X) side (that is, outside the square) of the left-side edge portion E1 of the square indicated by the image P12. Also, in the left eye L, an optical image IML is formed in which the image showing the right-side edge portion E2 and the right-side edge image PE2 are combined on the (−X) side (that is, inside the square) of the right-side edge portion E2 of the square indicated by the image P12.
FIG. 6 shows the brightness distribution of the optical image IML visually recognized by the left eye L in the case of FIG. 5 described above.
FIG. 6 is a graph showing an example of the brightness distribution of the optical image IM in the present embodiment. In FIG. 6, the X coordinates X1 to X6 are the X coordinates corresponding to the points at which the brightness of the optical image IML changes.
As shown in FIG. 6, the brightness of the image P12L visually recognized by the left eye L is described here as zero at the X coordinates X1 to X2, and is the luminance value BR2 at the X coordinates X2 to X6. Here, luminance values BR are used as an example of the brightness of an image.
In the case of this optical image IML, the brightness of the image P11L visually recognized by the left eye L is the luminance value BR1 at the X coordinates X1 to X2 and X4 to X5, and zero at the X coordinates X2 to X4. Therefore, the brightness (for example, luminance) of the optical image IML visually recognized by the left eye L is the luminance value BR1 at the X coordinates X1 to X2, the luminance value BR2 at the X coordinates X2 to X4 and X5 to X6, and, at the X coordinates X4 to X5, the luminance value BR3, which is the brightness obtained by combining the luminance value BR1 and the luminance value BR2.
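The composition of the two displayed images into the retinal luminance profile can be written down directly from the segment values above. The following sketch reproduces the piecewise profile of the optical image IML; the concrete values of BR1 and BR2 are illustrative assumptions.

    def iml_luminance(x, x1, x2, x4, x5, x6, br1=40.0, br2=100.0):
        """Piecewise luminance of the composite optical image IML along X:
        BR1 on [X1, X2), BR2 on [X2, X4) and [X5, X6), and the combined
        value BR3 = BR1 + BR2 on [X4, X5)."""
        if x1 <= x < x2:
            return br1            # left-side edge image only
        if x2 <= x < x4:
            return br2            # square image only
        if x4 <= x < x5:
            return br1 + br2      # overlap: BR3, the combined brightness
        if x5 <= x < x6:
            return br2            # square image only
        return 0.0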
Next, a mechanism in which the edge portion is visually recognized by the left eye L of the observer 1 will be described.
FIG. 7 is a graph showing an example of binocular parallax that occurs in the left eye L and right eye R in the present embodiment.
The brightness distribution of the image recognized by the observer 1 from the optical image IML formed on the retina of the left eye L is as shown by the waveform WL in FIG. 7. Here, the observer 1 recognizes, for example, the position on the X axis where the change in the brightness of the visually recognized image is greatest (that is, where the slope of the waveform WL or WR is greatest) as an edge portion of the object being viewed. In the case of the present embodiment, for the waveform WL on the left eye L side, the observer 1 recognizes, for example, the position XEL shown in FIG. 7 (that is, the position at the distance LEL from the origin O of the X axis) as the left-side edge portion of the square being viewed.
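In numerical terms, the perceived edge position is the point of maximum slope of the retinal luminance profile. A minimal sketch, assuming the profile is available as sampled values on a coordinate array:

    import numpy as np

    def perceived_edge_position(xs, luminance):
        """Position where the luminance profile (waveform WL or WR) changes
        fastest, i.e. where the absolute slope is greatest."""
        xs = np.asarray(xs, dtype=float)
        slope = np.gradient(np.asarray(luminance, dtype=float), xs)
        return xs[int(np.argmax(np.abs(slope)))]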
Next, a difference between the optical image IMR visually recognized by the right eye R of the observer 1 and the optical image IML will be described, and a mechanism for recognizing a stereoscopic image (three-dimensional image) based on the difference will be described.
As shown in FIG. 5, in the right eye R of the observer 1, an optical image IMR formed by combining the image P11R visually recognized by the right eye R and the image P12R visually recognized by the right eye R is formed.
Further, as shown in FIG. 6, the brightness (for example, luminance) of the optical image IMR visually recognized by the right eye R differs from the brightness of the optical image IML visually recognized by the left eye L at the X coordinates X1 to X3 and X4 to X6.
The brightness distribution of the image recognized by the observer 1 from the optical image IMR formed on the retina of the right eye R is as shown by the waveform WR in FIG. 7. Here, for the waveform WR on the right eye R side, the observer 1 recognizes, for example, the position XER shown in FIG. 7 (that is, the position at the distance LER from the origin O of the X axis) as the edge portion of the square being viewed.
Thereby, the observer 1 recognizes the position XEL of the square edge portion viewed by the left eye L and the position XER of the square edge portion viewed by the right eye R as binocular parallax. Then, the observer 1 recognizes the square image as a stereoscopic image (three-dimensional image) based on the binocular parallax at the edge portions.
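The binocular parallax recognized here is simply the offset between the two perceived edge positions; a trivial sketch (the sign convention is an assumption):

    def edge_disparity(x_el, x_er):
        """Binocular parallax of the perceived edge: the offset between the
        edge position XEL seen by the left eye and XER seen by the right eye.
        Its sign and magnitude determine whether the edge appears in front of
        or behind the display surface, and by how much."""
        return x_er - x_el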
As described above, the display device 10 of the present embodiment includes the display unit 12 that displays the image P12, and the display unit 11 that displays, at a depth position different from the depth position at which the image P12 is displayed, the image P11 including the edge image PE indicating the edge portions in the image P12. Thereby, the display device 10 can display the image P12 and the edge image PE (that is, the edge portions) of the image P12 superimposed on each other. In other words, the display device 10 of the present embodiment can display images without the image displayed on the display unit 11 (that is, the edge image PE) affecting the portions of the image displayed on the display unit 12 other than the edge portions.
Here, with reference to FIG. 8, the effect of a shift in the positions of the edges of the two images displayed on the display device 10 will be described.
FIG. 8 is a diagram explaining the effect of a shift in the positions of the edges of the two images displayed on the display device 10.
The display unit 11 and the display unit 12 of the display device 10 each have a display capable of presenting a stereoscopic image (3D image). Each display is provided with a pixel array of two-dimensionally arranged pixels on its display surface. In the image P11 displayed by the display unit 11, the luminance at each position on the display surface 11S is adjusted in units of pixels of the pixel array of the display unit 11. Similarly, in the image P12 displayed by the display unit 12, the luminance at each position on the display surface 12S is adjusted in units of pixels of the pixel array of the display unit 12.
Even if one attempts to display a high-definition image on a display device, the fineness of the displayable image may be limited by the resolution of the display device. Likewise, in the display device 10 of this embodiment, even if one attempts to display a fine edge with the edge image PE shown on the display unit 11, the edge width (line width) can only be adjusted in units of pixels and is limited by the pixel size.
Further, for example, in order to emphasize the edge of the edge image PE displayed on the display unit 11, the edge may be displayed with a width greater than the size (width) of a pixel. In this case, even though the edge can be thickened by lining up a plurality of pixels in the width direction of the edge, the edge width is still limited by the pixel size.
In the following description, the edge width is taken to correspond to the width of one pixel; however, the description also applies to the case where an edge formed by a plurality of pixels is displayed. The description in terms of pixel size may be replaced by a description in terms of pixel pitch.
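As a concrete reading of this constraint, the short Python sketch below (the pixel pitch and requested width are illustrative assumptions) shows how a requested edge width can only be realized as a whole number of pixel columns, and never as less than one column.

def displayable_edge_width(requested_width_mm, pixel_pitch_mm):
    # Snap a requested edge (line) width to a whole number of pixel
    # columns; the edge can never be thinner than one pixel.
    columns = max(1, round(requested_width_mm / pixel_pitch_mm))
    return columns, columns * pixel_pitch_mm

# Example: a 0.7 mm edge on a display with a 0.3 mm pixel pitch
# can only be shown as 2 columns, i.e. 0.6 mm wide.
columns, actual_width_mm = displayable_edge_width(0.7, 0.3)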
With reference to FIG. 8(a), the image of the right-side edge portion E2 of the quadrangle Ob given by the image P12 will be described.
In FIG. 8(a), the sub-figures a1 to a4, arranged in order from the top, show the following.
Sub-figure a1 is a front view of the display unit 11 from the side from which the display surface 11S is viewed, showing the state in which the right-side edge image PE2 given by the image P11 displayed on the display unit 11 appears on the display surface 11S. The grid in the figure indicates the position of each pixel; one pixel is arranged at each grid position. In this figure, for example, the right-side edge image PE2 is displayed in the pixels of the k-th column along the X axis. Columns ... (k-1), k, (k+1) ... are indicated in the Z-axis direction.
Sub-figure a2 is a front view from the display surface 12S side of the display unit 12, showing the state in which the quadrangle Ob given by the image P12 displayed on the display unit 12 appears. The right end OR of the quadrangle Ob is at a position coinciding with the right end of the right-side edge image PE2. In other words, the right end OR of the quadrangle Ob is at the position corresponding to the boundary between the k-th column of pixels, which displays the right-side edge image PE2 shown in sub-figure a1, and the (k+1)-th column of pixels. In short, the right-side edge portion E2 of the quadrangle Ob is at the position corresponding to the right-side edge image PE2 shown in the k-th column of sub-figure a1.
Sub-figure a3 shows the image (composite image) viewed in the state where the right-side edge image PE2 of sub-figure a1 and the right-side edge portion E2 of the quadrangle Ob of sub-figure a2 overlap.
Sub-figure a4 shows the brightness (luminance) of a strip of the image (composite image) of sub-figure a3 cut out in the horizontal direction (X-axis direction) so as to include the quadrangle Ob. For example, it shows a contour in which the luminance (V0) of the portion representing the quadrangle Ob before contour correction has been enhanced to the luminance Vp by the compositing.
FIG. 8(a) shows the image in the case where, at the right-side edge portion E2 of the quadrangle Ob given by the image P12, the right-side edge image PE2 is displayed on the (-X) side of the right end OR of the quadrangle Ob (that is, inside the quadrangle) such that the right end of the right-side edge image PE2 touches the right end OR of the quadrangle Ob. Because the right end of the right-side edge image PE2 touches the right end OR of the quadrangle Ob in this way, a composite image can be formed in which the change in luminance at the right-side edge portion E2 of the quadrangle Ob is both large and steep. The observer 1 can view an image whose luminance peak value has been raised to Vp (sub-figure a4).
An image composited in this way is viewed when the viewing position of the observer 1 is at the position from which the contour is emphasized most. In this embodiment, the viewing position from which such a composite image can be viewed is taken as the predetermined position from which the observer 1 can view the stereoscopic image.
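The aligned case of sub-figures a3 and a4 amounts to adding the edge-image luminance on top of the object luminance in the same pixel column. The Python sketch below (the column count and the values V0 and EDGE are illustrative assumptions) builds that one-dimensional luminance profile; the column ending at OR reaches the enhanced peak Vp = V0 + EDGE.

import numpy as np

V0 = 0.5      # luminance of the quadrangle Ob before contour correction (assumed)
EDGE = 0.4    # luminance contributed by the right-side edge image PE2 (assumed)

columns = np.arange(12)                  # pixel columns along the X axis
base = np.where(columns <= 7, V0, 0.0)   # quadrangle Ob fills columns 0..7
edge = np.zeros_like(base)
edge[7] = EDGE                           # PE2 sits in column k = 7, ending at OR

composite = base + edge                  # the superimposed image the observer sees
vp = composite.max()                     # peak value Vp = V0 + EDGE at the edge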
On the other hand, when the observer 1 moves away from the predetermined position at which the stereoscopic image can be viewed, the image shown in FIG. 8(a) can no longer be observed. FIG. 8(b) shows the case where the observer 1 moves along the X axis in the (-X) direction, and FIG. 8(c) shows the case where the observer 1 moves along the X axis in the (+X) direction. Sub-figures b1 to b4 in FIG. 8(b) and sub-figures c1 to c4 in FIG. 8(c) correspond to the aforementioned sub-figures a1 to a4, respectively. FIG. 8(b) and FIG. 8(c) are described in order below, focusing on the differences from FIG. 8(a).
First, the image viewed in the case of FIG. 8(b) will be described.
In sub-figure b2, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the boundary between the k-th column of pixels, which displays the right-side edge image PE2 shown in sub-figure b1, and the (k+1)-th column of pixels. In short, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2.
Because the right end OR of the quadrangle Ob is thus displayed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2, the composite image is one in which the position of the edge image PE2 at the right-side edge portion E2 of the quadrangle Ob has moved toward the inside of the quadrangle Ob.
In this composite image, at the right-side edge portion E2 of the quadrangle Ob, the luminance of the contour image emphasized by the right-side edge image PE2 and its width in the X-axis direction are the same as in FIG. 8(a) described above. In the composite image, however, the luminance at the right-side edge portion E2 of the quadrangle Ob comes to change in a staircase pattern. In an image whose luminance changes in such a staircase pattern, the amount of enhancement provided by the added right-side edge image PE2 as a contour image is reduced compared with the case of FIG. 8(a).
Next, the image viewed in the case of FIG. 8(c) will be described.
In sub-figure c2, the right end OR of the quadrangle Ob is observed at a position shifted in the (-X) direction from the boundary between the k-th column of pixels, which displays the right-side edge image PE2 shown in sub-figure c1, and the (k+1)-th column of pixels. In short, the right end OR of the quadrangle Ob is observed at a position shifted in the (-X) direction from the right end of the right-side edge image PE2.
Because the right end OR of the quadrangle Ob is thus displayed at a position shifted in the (-X) direction from the right end of the right-side edge image PE2, the composite image is one in which the position of the edge image PE2 at the right-side edge portion E2 has moved toward the outside of the quadrangle Ob, protruding beyond the shape of the quadrangle Ob.
In this composite image, at the right-side edge portion E2 of the quadrangle Ob, the luminance of the contour image emphasized by the right-side edge image PE2 is the same as in FIG. 8(a) described above. In the composite image, however, the X-axis width of the range over which that luminance is maintained becomes narrower, and the luminance at the right-side edge portion E2 of the quadrangle Ob comes to change in a staircase pattern. In an image whose luminance changes in such a staircase pattern, the amount of enhancement provided by the added right-side edge image PE2 as a contour image is reduced compared with the case of FIG. 8(a). Moreover, since the X-axis width of the range over which the luminance of the contour image is maintained is narrower than in FIGS. 8(a) and 8(b), the amount of enhancement as a contour image is reduced compared with the cases of FIGS. 8(a) and 8(b), even though the same right-side edge image PE2 is displayed.
As shown above, when the observer 1 moves away from the predetermined position at which the stereoscopic image can be viewed, the image shown in FIG. 8(a) can no longer be viewed, and the visibility of the stereoscopic image may deteriorate. Note that this cause of reduced visibility of the stereoscopic image is distinct from the degradation of a displayed image caused by aliasing, which arises because the pixels of the display device 10 are arranged discretely.
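The degradation of FIGS. 8(b) and 8(c) can be reproduced numerically by shifting the object by a fraction of a pixel while the edge-image column stays fixed. In the Python sketch below (same illustrative values as the previous sketch; the 0.4-pixel shift and the per-column area sampling are assumptions), the shifted boundary splits its luminance across two columns, so the profile near OR becomes a staircase and the full peak no longer ends exactly at the object's right end.

import numpy as np

V0, EDGE = 0.5, 0.4
columns = np.arange(12)

def object_profile(right_end):
    # Per-column luminance of the quadrangle Ob whose right end OR sits at
    # a possibly fractional column position (simple area sampling).
    coverage = np.clip(right_end + 1.0 - columns, 0.0, 1.0)
    return V0 * coverage

edge = np.zeros(12)
edge[7] = EDGE                           # PE2 stays fixed in column k = 7

aligned = object_profile(7.0) + edge     # FIG. 8(a): OR on the column boundary
shifted = object_profile(7.4) + edge     # FIG. 8(b): OR moved 0.4 px in +X
# In `shifted`, column 8 carries only 0.4 * V0, so the luminance descends
# in steps and the enhancement at OR is weaker than the aligned peak Vp.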
Therefore, in the display system 100 of this embodiment, the visibility of the stereoscopic image is improved by adjusting the edge image PE according to the method described below. The display system 100 of this embodiment is described in detail below.
<Configuration of display system>
Next, the configuration of the display system 100 will be described with reference to FIG.
FIG. 9 is a schematic block diagram showing the configuration of the display system 100 according to an embodiment of the present invention. The display system 100 illustrated in FIG. 9 includes an image processing device 2 and a display device 10.
The display device 10 has a display capable of stereoscopic image (3D image) display, which presents a stereoscopic image so that it can be viewed stereoscopically from a predetermined viewing position. Here, the stereoscopic image (3D image) may be either 3D video (a 3D moving image) or a 3D still image. The stereoscopic image (3D image) may also be either a natural image obtained by photographing an actual subject or an image generated, processed, or composited by software (a computer graphics (CG) image or the like). This display is, for example, a liquid crystal display, an organic EL (Electro-Luminescence) display, a plasma display, or the like that is capable of stereoscopic image display as described above. In short, the display of the display device 10 has, on its display surface, a pixel array of two-dimensionally arranged pixels.
The image processing device 2 includes a contour correction unit 210, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
The contour correction unit 210 corrects and outputs at least one piece of the image information supplied to it, in accordance with that supplied image information. For example, the image information to be processed by the contour correction unit 210 includes image information D11P (first image information) and image information D12P (second image information). The image information D11P is, of the image information for stereoscopically displaying a display target at a predetermined position by binocular parallax on the display unit 11 (first display unit) and the display unit 12 (second display unit), the image information to be displayed on the display unit 11. Likewise, the image information D12P is, of the image information for stereoscopically displaying the display target at the predetermined position by binocular parallax, the image information to be displayed on the display unit 12.
Note that the contour correction unit 210 in this embodiment corrects the image information D11P (first image information) to generate image information D11, and outputs the image information D12P (second image information) as image information D12 without correcting it.
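Read as a data flow, the contour correction unit 210 takes the pair (D11P, D12P) and returns (D11, D12), correcting only the first while consulting the second. The minimal Python skeleton below is a sketch of that flow only; the function names and the injected correction routine are assumptions, not the embodiment's actual processing.

def contour_correction_unit_210(d11p, d12p, correct_contour):
    # D11P (first image information) is corrected into D11; the correction
    # may consult D12P. D12P (second image information) is passed through
    # unchanged as D12.
    d11 = correct_contour(d11p, d12p)
    d12 = d12p
    return d11, d12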
The contour correction unit 210 in this embodiment corrects, within the image information D11P, the image information corresponding to the contour portion of the display target, based on: the predetermined position (Ma (FIG. 1)) at which the display target shown on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax; the positions of the plurality of pixels of the display unit 11 (for example, the positions of the two-dimensionally arranged pixels); and the pixel position on the display unit 11 corresponding to the contour portion of the display target (for example, the edge portion E).
Here, the predetermined position at which the display target shown on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax is a position from which the observer 1 can view the stereoscopic image. The positions of the plurality of two-dimensionally arranged pixels of the display unit 11 are the pixel positions in the pixel array of two-dimensionally arranged pixels of the display provided on the display surface 11S. The pixel position on the display unit 11 corresponding to the contour portion of the display target is the position at which the contour portion of the display target is displayed on the display unit 11.
When correcting the image information D11P, the contour correction unit 210 corrects the image information corresponding to the contour portion of the display target in accordance with the image information D12P (second image information). As described above, the image information D12P is, of the image information for stereoscopically displaying the display target at the predetermined position by binocular parallax, the image information to be displayed on the display unit 12.
The contour correction unit 210 corrects, within the image information D11P, the image information corresponding to the contour portion of the display target so as to reduce the loss of visibility of the stereoscopic display that arises, at the contour portion, from the relative positional relationship between the pixel position on the display unit 11 corresponding to the contour portion and the positions of the plurality of two-dimensionally arranged pixels of the display unit 11. For example, in correcting the image information D11P contained in the contour portion of the display target, the contour correction unit 210 corrects the luminance of the contour portion.
Also, for example, in correcting the image information D11P contained in the contour portion of the display target, the contour correction unit 210 corrects the width of the contour portion.
The details of the processing that corrects the image information corresponding to the contour portion of the display target so as to reduce the loss of visibility of the stereoscopic display at that contour portion are described later.
The contour correction unit 210 in the present embodiment will be described with a more specific example.
The contour correction unit 210 in the present embodiment includes a determination unit 213 and a correction unit 211.
The determination unit 213 performs a determination that sets the conditions controlling the correction processing of the correction unit 211 described below. For example, in the correction of the image information D11P contained in the contour portion of the display target, the determination unit 213 determines whether the position of the contour portion to be displayed on the display unit 11 straddles the range of a first pixel and a second pixel that are adjacent among the pixels of the display unit 11. For example, from this determination result, the determination unit 213 judges that correction is necessary when the position of the contour portion straddles the range of the adjacent first and second pixels of the display unit 11, and judges that correction is unnecessary when it does not. The adjacent first and second pixels are pixels arranged side by side in the direction in which stereoscopic parallax arises (the horizontal direction).
When the determination unit 213 determines that the position of the contour portion of the display target straddles the range of the first pixel and the second pixel, the correction unit 211 corrects the contour portion of the display target shown on the display device 10. In performing the correction, the correction unit 211 corrects, for example, the image information D11P displayed on at least one of the first pixel and the second pixel, based on the predetermined position, the positions of the plurality of two-dimensionally arranged pixels of the display unit 11, and the pixel position on the display unit 11 corresponding to the contour portion of the display target.
The correction unit 211 corrects the image information D11P displayed on the first pixel and the second pixel in accordance with a correction amount of the image information D11P determined for the first pixel and the second pixel as a pair. The contour correction unit 210 can thereby correct the image information D11P so that the position of the observer 1 (user) viewing the display unit 11 and the display unit 12 becomes a predetermined position from which the display target can be viewed stereoscopically by binocular parallax.
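The determination described for this pair of units reduces to a test in pixel coordinates: if the contour position does not fall on a column boundary, it straddles an adjacent pixel pair in the parallax (horizontal) direction and correction is needed. The Python sketch below encodes one plausible form of that test; the boundary convention, the tolerance, and the indexing are assumptions for illustration.

import math

def straddles_pixel_pair(contour_x, pixel_width=1.0, eps=1e-9):
    # Determination unit 213 (sketch): the contour straddles two adjacent
    # columns unless its position lies on a column boundary.
    frac = (contour_x / pixel_width) % 1.0
    return min(frac, 1.0 - frac) > eps

def adjacent_pixel_pair(contour_x, pixel_width=1.0):
    # Indices of the adjacent (first, second) columns the contour falls
    # between; the correction amount is then set for this pair together.
    k = math.floor(contour_x / pixel_width)
    return k, k + 1

needs_correction = straddles_pixel_pair(7.4)   # True: falls inside a column
pair = adjacent_pixel_pair(7.4)                # (7, 8)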
The imaging unit 230 images, on the side of the display surface on which the display device 10 displays, the direction that includes the above-described viewing position. That is, the imaging unit 230 images the direction facing the display surface of the display device 10.
For example, although not illustrated, the imaging unit 230 has an optical lens, an imaging element that captures the subject light flux (optical image) input through the optical lens, and an imaging signal processing unit that outputs the data captured by the imaging element as digital image data. The imaging unit 230 then supplies the captured image (digital image data) to the detection unit 250 and the control unit 260.
The detection unit 250 includes an observer detection unit 251 (detection unit), a face detection unit 252, and a face recognition unit 253.
The observer detection unit 251 detects, based on the image captured by the imaging unit 230, the position of an observer on the display-surface side of the display device 10 relative to the display device 10. That is, the observer detection unit 251 detects, based on the image captured by the imaging unit 230, the position, relative to the display device 10, of an observer in the direction facing the display device 10. For example, the observer detection unit 251 detects the observer's position (X-axis and Y-axis coordinates) in the plane parallel to the display surface of the display device 10 (the XY plane), based on the position of the image area of the observer's face detected by the face detection unit 252 within the image area captured by the imaging unit 230.
The observer detection unit 251 also detects, based on the image area of the observer's face detected by the face detection unit 252, the distance from the display device 10 to the observer with respect to the display surface of the display device 10 (the distance in the Z-axis direction, the Z-axis coordinate). For example, the observer detection unit 251 detects this distance based on the size of the image area of the observer's face detected by the face detection unit 252, or on the spacing between the left-eye image area and the right-eye image area within the face image area. Note that the observer detection unit 251 may also detect an observer from the image captured by the imaging unit 230 based on human features other than the face (for example, features of body shape).
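The eye-spacing cue mentioned here can be turned into a distance estimate with a pinhole-camera relation: the pixel spacing of the two detected eye areas shrinks in proportion to distance. The Python sketch below states only that textbook relation; the focal length in pixels and the assumed real interpupillary distance are illustrative assumptions, not values from the embodiment.

def observer_distance_mm(eye_spacing_px, focal_length_px=1200.0,
                         interpupillary_mm=63.0):
    # Pinhole relation Z = f * D / d: distance from camera to observer,
    # from the pixel spacing of the left-eye and right-eye image areas.
    return focal_length_px * interpupillary_mm / eye_spacing_px

z_mm = observer_distance_mm(90.0)   # eyes 90 px apart -> about 840 mm away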
The face detection unit 252 detects a face image area from the image captured by the imaging unit 230.
For example, the face detection unit 252 extracts, from the image captured by the imaging unit 230, image information indicating the outline of a face and the positions of the eyes, and, based on this extracted image information and on information indicating the features of a human face (information indicating facial outline and eye features), detects the image area of a face in the direction facing the display device 10, that is, the face of an observer. In the following description, "detecting a face image area" is also written simply as "detecting an observer's face". Here, the above information indicating the features of a human face is stored in the storage unit 270, for example as a face detection database used for detecting faces in images. The face detection unit 252 then supplies the detection result to the observer detection unit 251 or the face recognition unit 253.
If the face detection unit 252 cannot detect a face in the image captured by the imaging unit 230, it supplies the observer detection unit 251 or the face recognition unit 253 with a detection result indicating that there is no face in the direction facing the display device 10, that is, that there is no observer.
The face recognition unit 253 performs face recognition on the observer detected by the face detection unit 252, based on the image captured by the imaging unit 230. For example, the face recognition unit 253 recognizes which observer's face has been detected, based on the detection result of the face detection unit 252 (information indicating the facial outline, eye positions, and the like) and on information indicating the facial features of each of a plurality of people.
Here, the above information indicating the facial features of each of a plurality of people is stored in the storage unit 270, for example as a face recognition database used for recognizing faces extracted from images.
The face recognition database may be configured so that information can be added to it as desired. For example, the display system 100 may provide a menu for registering information indicating features used to recognize an observer's face; when this menu is executed, the imaging unit 230 images the face of the observer to be registered, face recognition information indicating the features of that observer's face is generated from the captured face image, and the information is registered in the face recognition database.
The storage unit 270 stores the information that the detection unit 250 needs for detection. For example, the storage unit 270 stores, as the face detection database, information indicating the features of a human face needed to detect a face in an image. The storage unit 270 also stores, as the face recognition database, information indicating the facial features of each of a plurality of people, needed to recognize a face extracted from an image.
The control unit 260 includes a difference calculation unit 261, a contour correction control unit 262, and a display control unit 263.
The difference calculation unit 261 calculates the difference (difference in position) between the relative position of the observer detected by the observer detection unit 251 and the viewing position from which the stereoscopic image displayed on the display device 10 can be viewed stereoscopically. For example, the difference calculation unit 261 calculates the difference between the positions indicated by the X-, Y-, and Z-axis coordinates of the detected relative position of the observer and of the viewing position.
The contour correction control unit 262 causes the contour correction unit 210 to generate, based on the image captured by the imaging unit 230, correction information for correcting the contours of the displayed image so that an image that can be viewed stereoscopically from the viewing position of the observer 1 can be displayed. For example, based on the relative position of the observer 1 detected by the observer detection unit 251, the contour correction control unit 262 generates position information indicating the predetermined position at which the display target shown on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax. The contour correction control unit 262 supplies the generated position information to the contour correction unit 210 and, based on that position information, causes the contour correction unit 210 to generate correction information for correcting the contours of the image displayed on the display device 10. In the following description, the relative position of the observer may also be written simply as the position of the observer.
The display control unit 263 controls the display of the display device 10. The display control unit 263 causes the display device 10 to display a stereoscopic image based on the input image signal.
Here, with reference to FIG. 10, the positional relationship between the observer, the display device, and the contour portion in the target image will be described.
FIG. 10 is a diagram showing the positional relationship among the observer, the display device, and the contour portion in the target image. The observer's position in FIG. 10 is based on the position information of the observer 1 detected by the observer detection unit 251 as described above.
FIG. 10(a) shows the display device 10 viewed from the display-surface side (the (+Z)-axis side). FIG. 10(b) shows the XZ plane viewed in plan from the (+Y)-axis side.
In the following description, to make the positional relationships easier to show, the positional relationship between the edge portion E in the image P12 and the edge image PE in the image P11 is described in simplified form. For example, suppose that, from the position Ma of the observer 1, the edge portion E in the image P12 is seen overlapping the edge image PE in the image P11. The positional relationship in this case is taken to be one in which the image P11 and the image P12 are arranged at positions that make stereoscopic viewing easy. More specifically, when the position of a point (point POS2) representing the edge portion E in the image P12 and the position of the corresponding point (point POS1) on the image P11 coincide in the composite image, so that the edges of the image P12 and the edge image PE coincide, the two images are taken to be viewed as overlapping.
Here, the positions of the points shown in FIG. 10 will be described. The position of the observer 1 is taken to be a point on the Z axis separated from the surface of the display device 10 (display unit 12) by a distance LD in the (+Z) direction. The position of the observer 1 is denoted Ma(0, 0, LD), the position of the point POS2 is (X2, Y2, 0), and the position of the point POS1 is (X1, Y1, -ZD). When Ma(0, 0, LD), the point POS2, and the point POS1 lie on a straight line, the point POS2 and the point POS1 are seen overlapping each other.
With this positional relationship, the observer 1 located at Ma(0, 0, LD) can view the stereoscopic image formed by the image P11 and the image P12.
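The collinearity condition fixes POS1 once Ma and POS2 are chosen: extending the ray from the observer through POS2 to the plane z = -ZD of the display unit 11 gives the point the edge image must occupy. The Python sketch below performs that ray extension; the example coordinates are illustrative assumptions.

def pos1_for_overlap(ma, pos2, zd):
    # Extend the line of sight from observer Ma through POS2 (on the
    # display unit 12 surface, z = 0) to the plane z = -zd; the result is
    # where POS1 must lie for Ma, POS2 and POS1 to be collinear.
    mx, my, ld = ma
    x2, y2, _ = pos2
    t = (ld + zd) / ld           # ray parameter at which z reaches -zd
    return (mx + t * (x2 - mx), my + t * (y2 - my), -zd)

# Example: observer 600 mm in front of the display, panels 20 mm apart.
pos1 = pos1_for_overlap((0.0, 0.0, 600.0), (50.0, 30.0, 0.0), 20.0)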
On the other hand, when the position of the observer 1 moves from Ma(0, 0, LD) to Mb(-ΔX, 0, LD) or Mc(+ΔX, 0, LD) in the direction along the X axis, the point POS1 no longer overlaps the point POS2. For example, when the position of the observer 1 moves from Ma to Mb, the position of the point POS1 moves to the point POS1'. When the position of the observer 1 moves from Ma to Mc, the position of the point POS1 moves to the point POS1''.
If the amount by which the position of the point POS1 moves were always a multiple of the X-axis width d of one pixel of the display unit 11, the position of the point POS1 could be moved in units of one pixel of the display unit 11. However, the amount by which the position of the point POS1 moves is not necessarily a multiple of the X-axis width of one pixel of the display unit 11. Since the position at which the edge image PE is displayed depends on the pixel positions of the display unit 11, the positions at which the edge image PE can be displayed do not necessarily correspond to the position of the point POS1 calculated as above. Thus, when the point POS1 no longer overlaps the point POS2, even if one tries to adjust the position at which the contour image is displayed and move it to the appropriate position, the resolution limit of the display unit 11 may make it impossible to display the images so that they overlap.
Therefore, when the position of the point POS1 has moved from its original position (X1, Y1, -ZD) because the observer 1 has moved from Ma, the control unit 260 controls the contour correction unit 210 so as to correct the contour image of the image P11 in accordance with the amount of movement from (X1, Y1, -ZD).
Note that, from the positional relationship shown in FIG. 10(b), the amount of movement, on the display surface 11S of the display unit 11, of the image displayed on the display unit 11 can be calculated from the amount of movement of the observer 1 by a proportional calculation based on similar triangles.
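That proportional calculation can be written out. With the observer at distance LD in front of the display unit 12, and the display unit 11 a further ZD behind it, the ray through a fixed point POS2 sweeps across the rear plane by ZD/LD of the observer's lateral movement, in the opposite direction; dividing by the pixel width d exposes the fractional-pixel remainder that motivates the correction. The Python sketch below derives only this from the stated geometry; the numbers are illustrative assumptions.

def pos1_shift_on_11s(observer_shift_x, ld, zd):
    # Similar triangles: when the observer moves laterally by
    # observer_shift_x, the point on the display surface 11S seen through
    # a fixed POS2 moves by ZD / LD of that amount, in the opposite sense.
    return -observer_shift_x * zd / ld

# Example: observer moves +30 mm, LD = 600 mm, ZD = 20 mm, pixel width d = 0.3 mm.
shift_mm = pos1_shift_on_11s(30.0, 600.0, 20.0)       # -1.0 mm along X on 11S
whole_cols, leftover_mm = divmod(abs(shift_mm), 0.3)  # 3 whole columns, ~0.1 mm
# The ~0.1 mm remainder cannot be realized by moving PE in whole-pixel
# steps, which is why the contour image itself is corrected instead.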
Next, an example of contour image correction in the display system 100 will be described with reference to FIG.
FIG. 11 is a diagram for explaining the correction of the contour image in the display system 100.
As described above with reference to FIG. 8, the positions of the edges of the two images displayed on the display device 10 may shift relative to each other depending on the observation position of the observer 1.
In this example, the contour correction unit 210 of the image processing device 2 corrects the contour image by widening the region having the same brightness (luminance) as the region indicating the contour in the contour image, so as to reduce the loss of visibility caused by the above shift. The details are described below using an example.
Each sub-figure of FIG. 11 corresponds to the respective sub-figure of FIG. 8 described above. For example, FIG. 11(a), like FIG. 8(a), shows the image of the right-side edge portion E2 of the quadrangle Ob given by the image P12.
Sub-figures a1 to a4, arranged in order from the top in FIG. 11(a), are likewise the same as sub-figures a1 to a4 of FIG. 8(a).
On the other hand, when the observer 1 moves away from the predetermined position at which the stereoscopic image can be viewed, the image shown in FIG. 11(a) can no longer be viewed. FIG. 11(b) shows the case where the observer 1 moves along the X axis in the (-X) direction, and FIG. 11(c) shows the case where the observer 1 moves along the X axis in the (+X) direction. Sub-figures b1 to b4 in FIG. 11(b) and sub-figures c1 to c4 in FIG. 11(c) correspond to the aforementioned sub-figures a1 to a4, respectively. FIG. 11(b) and FIG. 11(c) are described in order below, focusing on the differences from FIG. 11(a).
Here, in this embodiment, the width of the edge portion E (right-side edge portion E2) of the quadrangle Ob in the image P12 is taken to be the X-axis width of a pixel of the display unit 11 (d (FIG. 10)).
First, the correction of the contour image in FIG. 11(b) will be described.
In sub-figure b2, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the boundary between the k-th column of pixels (column k) and the (k+1)-th column of pixels (column (k+1)) shown in sub-figure b1. When the right end OR of the quadrangle Ob is at such a position, the right-side edge portion E2 of the quadrangle Ob straddles pixels of adjacent columns in the X-axis direction. For example, the pixels of adjacent columns in the X-axis direction are shown as the k-th column of pixels and the (k+1)-th column of pixels in sub-figure b1.
Here, the contour correction unit 210 generates a right-side edge image PE2' from the right-side edge image PE2 given by the image information D11P. The contour correction unit 210 in this embodiment arranges the right-side edge image PE2' alongside the right-side edge image PE2 so as to correct the pixels of the adjacent column in the X-axis direction in accordance with the right-side edge image PE2. In FIG. 11(b), the contour correction unit 210 places the right-side edge image PE2' in column (k+1), next to the right-side edge image PE2 located in column k.
FIG. 11(b) also shows an example in which the pixels shown as the right-side edge image PE2 and the right-side edge image PE2' have the same luminance. The case where the luminance of the pixels representing the right-side edge image PE2 and the right-side edge image PE2' is adjusted is described later.
Having detected, as described above, that the right end OR of the quadrangle Ob is displayed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2, the contour correction unit 210 causes the right-side edge image PE2' to be displayed. As a result, an image corrected by the edge image PE2 and the right-side edge image PE2' is composited at the right-side edge portion E2 of the quadrangle Ob (sub-figures b3 and b4). Through this correction, with the edge image PE2 and the right-side edge image PE2' arranged side by side, a contour image with enhanced luminance is composited at the right-side edge portion E2 of the quadrangle Ob, as in sub-figure a4 of FIG. 11(a) (sub-figure b4). Moreover, even when the right end OR of the quadrangle Ob has moved in the (+X) direction from the right end of the right-side edge image PE2, the presence of the right-side edge image PE2 and the right-side edge image PE2' means that a contour image holding the luminance peak value (Vp) continues up to the position of the right end OR of the quadrangle Ob. Note that the right-side edge image PE2' is placed contiguous with the right-side edge image PE2, on its (+X) side.
Thus, according to the above correction method, when the edge position in the contour image has shifted relatively toward the inside of the quadrangle Ob, the effect of the shift in the edge position can be reduced.
Next, the correction of the contour image in FIG. 11(c) will be described.
In sub-figure c2, the right end OR of the quadrangle Ob is observed at a position shifted in the (-X) direction from the boundary between the k-th column of pixels (column k), which displays the right-side edge image PE2 shown in sub-figure c1, and the (k+1)-th column of pixels (column (k+1)). When the right end OR of the quadrangle Ob is at such a position, the right-side edge portion E2 of the quadrangle Ob straddles pixels of adjacent columns in the (-X) direction, namely the k-th column of pixels and the (k-1)-th column of pixels.
Here, the contour correction unit 210 generates a right-side edge image PE2' from the right-side edge image PE2 given by the image information D11P. The contour correction unit 210 in this embodiment arranges the right-side edge image PE2' alongside the right-side edge image PE2 so as to correct the pixels of the adjacent column in the X-axis direction in accordance with the right-side edge image PE2. In FIG. 11(c), the contour correction unit 210 places the right-side edge image PE2' in column (k-1), next to the right-side edge image PE2 located in column k.
FIG. 11(c) also shows an example in which the pixels shown as the right-side edge image PE2 and the right-side edge image PE2' have the same luminance. The case where the luminance of the pixels representing the right-side edge image PE2 and the right-side edge image PE2' is adjusted is described later.
Having detected, as described above, that the right end OR of the quadrangle Ob is displayed at a position shifted in the (-X) direction from the right end of the right-side edge image PE2, the contour correction unit 210 causes the right-side edge image PE2' to be displayed. As a result, an image corrected by the edge image PE2 and the right-side edge image PE2' is composited at the right-side edge portion E2 of the quadrangle Ob (sub-figures c3 and c4). Through this correction, with the edge image PE2 and the right-side edge image PE2' arranged side by side, a contour image with enhanced luminance is composited at the right-side edge portion E2 of the quadrangle Ob, as in sub-figure a4 of FIG. 11(a) (sub-figure c4). Moreover, even when the right end OR of the quadrangle Ob has moved in the (-X) direction from the right end of the right-side edge image PE2, the presence of the right-side edge image PE2 and the right-side edge image PE2' means that a contour image holding the luminance peak value continues up to the position of the right end OR of the quadrangle Ob. Note that the right-side edge image PE2' is placed contiguous with the right-side edge image PE2, on its (-X) side.
Thus, according to the above correction method, when the edge position in the contour image has shifted relatively toward the outside of the quadrangle Ob, the effect of the shift in the edge position can be reduced.
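Both corrected cases above follow one rule: detect on which side of the fixed edge column the right end OR is observed to have drifted, and duplicate the edge image, at the same luminance, into the neighboring column on that side. The Python sketch below states that rule over a one-dimensional row of column luminances; the representation and variable names are assumptions for illustration.

import numpy as np

def widen_edge_image(edge_row, k, observed_shift_px):
    # PE2 occupies column k. If OR is observed shifted by a fraction of a
    # pixel, place PE2' (equal luminance) in column k+1 for a +X shift
    # (FIG. 11(b)) or in column k-1 for a -X shift (FIG. 11(c)).
    corrected = edge_row.copy()
    if observed_shift_px > 0:
        corrected[k + 1] = edge_row[k]
    elif observed_shift_px < 0:
        corrected[k - 1] = edge_row[k]
    return corrected          # no observed shift: the row is left unchanged

row = np.zeros(12); row[7] = 0.4          # PE2 in column k = 7
widened = widen_edge_image(row, 7, +0.4)  # PE2' added in column 8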
In short, comparing the composite images shown in FIGS. 11(b) and 11(c) with the composite images shown in FIGS. 8(b) and 8(c), the luminance peak value Vp of the contour image is the same in every case. In the composite images of FIGS. 11(b) and 11(c), however, the width over which the contour image holds the luminance peak value Vp, continuing up to the position of the right end OR of the quadrangle Ob, can be made wider than the pixel width; this is the difference from FIGS. 8(b) and 8(c). In short, by widening the extent of the luminance peak of the composite image near the right end OR of the quadrangle Ob, the position of the end recognized as the edge of the contour image can be aligned with the right end OR of the quadrangle Ob. The display system 100 can thereby present a contour whose width, at the peak value Vp, is greater than the pixel width, with its end aligned with the right end OR.
According to this example, the display system 100 composites a contour image with enhanced luminance at the edge portion by adding the right-side edge image PE2' to the right-side edge image PE2 to strengthen the contour image. The display system 100 can thereby reduce the effect of the shift in edge position caused by the observer 1 moving away from the predetermined position.
As shown above, the display system 100 of this embodiment can improve the visibility of the stereoscopic image even when the observer 1 has moved away from the predetermined position at which the stereoscopic image can be viewed.
(Modification in this embodiment)
Modifications of the contour-image correction method in the display system 100 of this embodiment will be described with reference to FIGS. 12 to 14.
(First modification)
FIG. 12 is a diagram for explaining a modification of the contour image correction method in the display system 100.
In this modification, a region whose brightness (luminance) is reduced relative to the brightness (luminance) of the contour region of the contour image is placed contiguous with the contour region before correction, so that the above shift can be tolerated. The details are described below.
Each sub-figure of FIG. 12 corresponds to the respective sub-figure of FIG. 8 described above. For example, FIG. 12(a), like FIG. 8(a), shows the image of the right-side edge portion E2 of the quadrangle Ob given by the image P12.
Sub-figures a1 to a4, arranged in order from the top in FIG. 12(a), are likewise the same as sub-figures a1 to a4 of FIG. 8(a).
On the other hand, when the observer 1 moves away from the predetermined position at which the stereoscopic image can be viewed, the image shown in FIG. 12(a) can no longer be viewed. FIG. 12(b) shows the case where the observer 1 moves along the X axis in the (-X) direction, and FIG. 12(c) shows the case where the observer 1 moves along the X axis in the (+X) direction. Sub-figures b1 to b4 in FIG. 12(b) and sub-figures c1 to c4 in FIG. 12(c) correspond to the aforementioned sub-figures a1 to a4, respectively. FIG. 12(b) and FIG. 12(c) are described in order below, focusing on the differences from FIG. 12(a).
Here, in this modification, the width of the edge portion E (right-side edge portion E2) of the quadrangle Ob in the image P12 is taken to be the X-axis width of a pixel of the display unit 11.
First, the contour image correction in FIG. 12B is described. The correction in FIG. 12B is applied at the same position as the contour image correction in FIG. 11B described above. In short, the contour correction unit 210 places the right-side edge image PE2' in column (k+1), alongside the right-side edge image PE2 located in column k.
However, unlike FIG. 11B described above, FIG. 12B shows an example in which the pixels shown as the right-side edge image PE2 and the right-side edge image PE2' have different luminances.
The specific difference, as shown in diagram a1 of FIG. 12B, lies in the brightness (luminance) of the right-side edge image PE2' that is generated in the pixels of column (k+1) from the right-side edge image PE2 indicated by the image information D11P. In this modification, the luminance of the right-side edge image PE2' is set lower than the luminance of the base right-side edge image PE2. The contour image correction of this modification differs in this respect from the contour image correction shown as an example of the first embodiment.
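The placement and attenuation described above can be summarized in a short sketch. The following is a minimal illustration in Python, assuming the contour image is handled one scan line at a time as an array of pixel luminances; the function name, the attenuation parameter, and its default value of 0.5 are illustrative assumptions, not values given in this description.

    import numpy as np

    def add_companion_edge(row, k, direction=1, attenuation=0.5):
        # row         : 1-D array of pixel luminances for one scan line
        # k           : column index of the original edge image PE2
        # direction   : +1 places PE2' at column k+1 (right end OR shifted
        #               in the +X direction), -1 places it at column k-1
        # attenuation : luminance of PE2' relative to PE2 (assumed value)
        corrected = row.copy()
        j = k + direction
        if 0 <= j < len(row):
            # PE2' is a reduced-luminance copy of PE2, placed so the
            # contour continues to the displayed right end OR.
            corrected[j] = max(corrected[j], attenuation * row[k])
        return corrected

    # Example: a single edge of peak luminance Vp = 1.0 at column k = 4.
    row = np.zeros(8)
    row[4] = 1.0
    print(add_companion_edge(row, k=4, direction=1))  # PE2' appears at column 5

The same helper covers the (k−1) case described later by passing direction=-1.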
Upon detecting that the right end OR of the quadrangle Ob is displayed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2 as described above, the right-side edge image PE2' is displayed. As a result, an image corrected by the edge image PE2 and the right-side edge image PE2' is synthesized at the right-side edge portion E2 of the quadrangle Ob (diagrams b3 and b4).
By this correction, with the edge image PE2 and the right-side edge image PE2' arranged side by side, a contour image with enhanced luminance is synthesized at the right-side edge portion E2 of the quadrangle Ob, as in diagram a4 of FIG. 12A described above (diagram b4). Moreover, even when the right end OR of the quadrangle Ob shifts in the (+X) direction from the right end of the right-side edge image PE2, the presence of both the right-side edge image PE2 and the right-side edge image PE2' keeps the contour image continuous up to the position of the right end OR of the quadrangle Ob. In this modification, the luminance peak values of the contour image differ between the right-side edge image PE2 and the right-side edge image PE2': the peak luminance of the contour image in the PE2' section is lower than the peak luminance value (Vp) of the contour image in the PE2 section. Note that the right-side edge image PE2' adjoins the right-side edge image PE2 on its (+X) side.
In short, comparing the synthesized image shown in FIG. 12B with the synthesized image shown in FIG. 8B, the width showing the luminance peak value Vp of the contour image remains one pixel, as in FIG. 8B, but the contour image now continues to the position of the right end OR of the quadrangle Ob, which differs from FIG. 8B. That is, by raising the luminance of the synthesized image at the right end OR of the quadrangle Ob, the position of the end recognized as the edge of the contour image is aligned with the right end OR, and a fine contour whose peak-value width Vp is one pixel can be shown at that position.
Next, the contour image correction in FIG. 12C is described. The correction in FIG. 12C is applied at the same position as the contour image correction in FIG. 11C described above. In short, the contour correction unit 210 places the right-side edge image PE2' in column (k−1), alongside the right-side edge image PE2 located in column k.
However, unlike FIG. 11C described above, FIG. 12C shows an example in which the pixels shown as the right-side edge image PE2 and the right-side edge image PE2' have different luminances.
The specific difference, as shown in diagram a1 of FIG. 12C, lies in the brightness (luminance) of the right-side edge image PE2' that is generated in the pixels of column (k−1) from the right-side edge image PE2 indicated by the image information D11P. In this modification, the luminance of the right-side edge image PE2' is set lower than the luminance of the base right-side edge image PE2. This point also differs from the contour image correction shown as an example of the first embodiment.
Upon detecting that the right end OR of the quadrangle Ob is displayed at a position shifted in the (−X) direction from the right end of the right-side edge image PE2 as described above, the right-side edge image PE2' is displayed. As a result, an image corrected by the edge image PE2 and the right-side edge image PE2' is synthesized at the right-side edge portion E2 of the quadrangle Ob (diagrams c3 and c4).
By this correction, with the edge image PE2 and the right-side edge image PE2' arranged side by side, a contour image with enhanced luminance is synthesized at the right-side edge portion E2 of the quadrangle Ob, as in diagram a4 of FIG. 12A described above (diagram c4). Moreover, even when the right end OR of the quadrangle Ob shifts in the (−X) direction from the right end of the right-side edge image PE2, the presence of both the right-side edge image PE2 and the right-side edge image PE2' keeps the contour image continuous up to the position of the right end OR of the quadrangle Ob. In this modification, the luminance peak values of the contour image differ between the edge image PE2 and the right-side edge image PE2': the peak luminance of the contour image in the PE2' section is lower than that in the PE2 section. Note that the right-side edge image PE2' adjoins the right-side edge image PE2 on its (−X) side.
In short, comparing the synthesized image shown in FIG. 12C with the synthesized image shown in FIG. 8C, the width showing the luminance peak value Vp of the contour image is the same as in FIG. 8C, but the contour image is additionally corrected on the inner side of the quadrangle Ob, which differs from FIG. 8C. That is, by raising the luminance of the synthesized image in the region inside the quadrangle Ob adjoining the contour region at the luminance peak value Vp, the width recognized as the contour image can be widened.
By synthesizing a contour image whose luminance is enhanced in this way, the contour image formed by the added right-side edge image PE2' can be emphasized compared with FIG. 8 described above.
As described above, even when the observer 1 moves from the predetermined position where the stereoscopic image can be visually recognized, the visibility of the stereoscopic image can be improved.
(Second modification)
Next, a modified example of adjusting the brightness (luminance) of the edge image PE' when generating the edge image PE' from the edge image PE will be described.
In this modification, a region whose brightness (luminance) is reduced relative to the contour of the contour image is arranged adjoining the contour region before correction, and the luminance of that adjoining region is adjusted according to the magnitude of the above-described shift, which makes the shift tolerable. Details are described below.
FIG. 13 illustrates, as a modification of the contour image correction method in the display system 100 of the present embodiment, the adjustment of the brightness (luminance) of the edge image PE'.
The upper row shows the positional relationship of the observer 1 relative to the images P11 and P12 displayed on the display device 10 (display unit 11 and display unit 12 (FIG. 1)). The middle row shows the brightness (luminance) of the contour image displayed on the display unit 11. The lower row shows the result of synthesizing the images displayed on the display device 10.
FIGS. 13A to 13E show, in order, how the display of the contour portion changes according to the amount by which the observer 1 has moved from the predetermined position (Ma (FIG. 10)).
FIG. 13A shows the case where the position of the observer 1 coincides with the predetermined position (Ma (FIG. 10)), and corresponds to FIGS. 11A and 12A described above.
FIG. 13B shows a first stage in which the position of the observer 1 has moved from the predetermined position (Ma (FIG. 10)) in the (+X) direction, and corresponds to FIGS. 12B and 12C described above. That is, the contour correction unit 210 generates an edge image PE1' (PE2') whose luminance is reduced below the luminance of the edge image PE1 (PE2).
FIG. 13C shows a second stage in which the position of the observer 1 has moved further from the predetermined position (Ma (FIG. 10)) in the (+X) direction, and corresponds to FIGS. 11B and 11C described above. That is, the contour correction unit 210 generates an edge image PE1' (PE2') whose luminance is the same as the luminance of the edge image PE1 (PE2).
FIG. 13D shows a third stage in which the position of the observer 1 has moved still further from the predetermined position (Ma (FIG. 10)) in the (+X) direction.
Up to the second stage described above, the luminance of the generated edge image PE1' (PE2') was adjusted while the luminance of the edge image PE1 (PE2) was held at its initial value (Vp). From this third stage onward, the luminance of the edge image PE1 (PE2) is adjusted while the luminance of the edge image PE1' (PE2') is held at the initial value (Vp).
Specifically, the contour correction unit 210 reduces the luminance of the edge image PE1 (PE2) while keeping the luminance of the edge image PE1' (PE2') at the same value (Vp) as in the second stage.
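As a rough sketch of this staged behavior (an illustration, not a specified implementation): the observer's displacement can be expressed as a fraction m of one pixel width, and the two luminances adjusted piecewise. The function name and the choice of 0.5 as the stage boundary are assumptions; the description above fixes only the order in which the two edges are adjusted.

    def edge_luminances_stagewise(m, vp=1.0):
        # m = 0 : observer at the predetermined position Ma (only PE lit)
        # m = 1 : observer one pixel away (only PE' lit)
        if m <= 0.5:
            # Stages 1-2: hold PE at the initial value Vp, raise PE'.
            return vp, vp * (m / 0.5)
        # Stages 3-4: hold PE' at Vp, lower PE.
        return vp * ((1.0 - m) / 0.5), vp

    # Sampled at the five stages of FIG. 13 (a)-(e):
    for m in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(m, edge_luminances_stagewise(m))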
FIG. 13E shows a fourth stage in which the position of the observer 1 has moved still further from the predetermined position (Ma (FIG. 10)) in the (+X) direction. At this stage, the amount of movement of the observer 1 has become large, and the edge image PE1' (PE2') generated as correction information is displayed on the pixel adjacent to the pixel that originally displayed the edge image PE1 (PE2) in FIG. 13A, indicating a transition to a state in which a proper contour can be displayed. At this stage, the edge image PE1' (PE2') is displayed and the edge image PE1 (PE2) is not.
In this modification described above, the region in which the luminance of the edge image PE1 (PE2) is adjusted and the region in which the luminance of the edge image PE1' (PE2') is adjusted are determined according to the amount of movement of the observer 1. Although this is a simplified method, the correction amount of the contour image can thereby be adjusted continuously in accordance with the movement of the observer 1.
As described above, even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized, the visibility of the stereoscopic image can be improved.
(Third modification)
Next, another modified example of adjusting the brightness (luminance) of the edge image PE' when generating the edge image PE' from the edge image PE will be described.
In this modification, a region whose brightness (luminance) is reduced relative to the contour of the contour image is arranged adjoining the contour before correction, and the luminances of both the contour region and the adjoining region are adjusted according to the magnitude of the above-described shift, which makes the shift tolerable. Details are described below.
FIG. 14 illustrates, as a further modification of the contour image correction method in the display system 100 of the present embodiment, the adjustment of the brightness (luminance) of the edge image PE'.
The upper row shows the positional relationship of the observer 1 relative to the images P11 and P12 displayed on the display device 10 (display unit 11 and display unit 12 (FIG. 1)). The middle row shows the brightness (luminance) of the contour image displayed on the display unit 11. The lower row shows the result of synthesizing the images displayed on the display device 10.
In the second modification described above, the region in which the luminance of the edge image PE1 (PE2) is adjusted and the region in which the luminance of the edge image PE1' (PE2') is adjusted were divided according to the amount of movement of the observer 1. In the present modification, by contrast, the two regions are not divided according to the amount of movement; instead, the respective luminances are both adjusted according to the amount of movement of the observer 1, as shown below.
For example, coefficients k1 and k2 whose values change according to the amount of movement of the observer 1 are defined. The coefficient k1 changes from 1 toward 0 as the amount of movement of the observer 1 increases, and the coefficient k2 changes from 0 toward 1 as the amount of movement increases. Following the coefficients k1 and k2 defined in this way, the initial luminance value (Vs) of the edge image PE1 (PE2) is multiplied by k1 to set the luminance of the edge image PE1 (PE2), and the initial luminance value (Vs) of the edge image PE1 (PE2) is multiplied by k2 to set the luminance of the edge image PE1' (PE2'). By setting the coefficients k1 and k2 in this way, as the amount of movement of the observer 1 increases, the luminance of the edge image PE1 (PE2) gradually decreases and the luminance of the edge image PE1' (PE2') gradually increases.
The values of the coefficients k1 and k2 may also be chosen so that they sum to 1. With this setting, the range over which the observer 1 moves can be defined according to the pixel width in the display unit 11. Furthermore, when the observer 1 reaches the midpoint of the range defined as the movement range, the luminance of the edge image PE1 (PE2) and the luminance of the edge image PE1' (PE2') can be set to the same value, namely half the initial luminance value of the edge image PE1 (PE2).
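A minimal sketch of this complementary weighting, assuming linear coefficients over a movement fraction m in [0, 1]: the description above fixes only the endpoints, the monotonic behavior, and the optional constraint k1 + k2 = 1, so the linear form and the function name are assumptions.

    def edge_luminances_blended(m, vs=1.0):
        # k1 falls from 1 to 0 and k2 rises from 0 to 1 with k1 + k2 = 1,
        # so at m = 0.5 both edge images show half the initial luminance Vs.
        k1 = 1.0 - m  # weight for the original edge image PE1 (PE2)
        k2 = m        # weight for the generated edge image PE1' (PE2')
        return k1 * vs, k2 * vs

    # Stages (a)-(e) of FIG. 14 correspond roughly to m = 0, 0.25, 0.5, 0.75, 1.
    for m in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(m, edge_luminances_blended(m))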
FIGS. 14A to 14E are arranged in order according to the amount by which the observer 1 has moved from the predetermined position (Ma (FIG. 10)).
FIG. 14A shows the case where the position of the observer 1 coincides with the predetermined position (Ma (FIG. 10)), and corresponds to FIGS. 11A, 12A, and 13A described above.
FIG. 14B shows a first stage in which the position of the observer 1 has moved from the predetermined position (Ma (FIG. 10)) in the (+X) direction. According to the coefficients k1 and k2, the relationship 0 < k2 < k1 < 1 holds.
That is, the luminance value of the edge image PE1 (PE2) is reduced from the initial luminance value (Vs) of the edge image PE1 (PE2) to V1. Meanwhile, the luminance value of the edge image PE1' (PE2') generated from the edge image PE1 (PE2) is set to V3, greatly reduced from the initial luminance value (Vs) of the edge image PE1 (PE2).
FIG. 14C shows a second stage in which the position of the observer 1 has moved further from the predetermined position (Ma (FIG. 10)) in the (+X) direction. Expressed with the coefficients k1 and k2, the relationship k1 = k2 = 0.5 holds. That is, the luminances of the edge image PE1 (PE2) and the edge image PE1' (PE2') are reduced to the same value (V2), half the initial value (Vs) of the edge image PE1 (PE2).
FIG. 14D shows a third stage in which the position of the observer 1 has moved still further from the predetermined position (Ma (FIG. 10)) in the (+X) direction. According to the coefficients k1 and k2, the relationship 0 < k1 < k2 < 1 holds.
That is, the luminance value of the edge image PE1 (PE2) is greatly reduced from the initial luminance value (Vs) of the edge image PE1 (PE2) to V3. Meanwhile, the luminance of the edge image PE1' (PE2') generated from the edge image PE1 (PE2) is set to V1, reduced from the initial luminance value (Vs) of the edge image PE1 (PE2).
FIG. 14E shows a fourth stage in which the position of the observer 1 has moved still further from the predetermined position (Ma (FIG. 10)) in the (+X) direction. At this stage, the amount of movement of the observer 1 has become large, and the edge image PE1' (PE2') generated as correction information is displayed on the pixel adjacent to the pixel that originally displayed the edge image PE1 (PE2) in FIG. 13A, indicating a transition to a state in which a proper contour can be displayed. At this stage, the edge image PE1' (PE2') is displayed and the edge image PE1 (PE2) is not.
In this modification described above, the luminances of the region in which the edge image PE1 (PE2) is displayed and of the region in which the edge image PE1' (PE2') is displayed are each adjusted according to the amount of movement of the observer 1. Although this is a simplified method, the correction amount of the contour image can thereby be adjusted continuously in accordance with the movement of the observer 1. Furthermore, according to the contour image correction method of this modification, the luminances of the edge image PE1 (PE2) and the edge image PE1' (PE2') are both reduced according to the amount of movement of the observer 1. Controlling them in this way prevents the contour from being over-emphasized by placing the edge image PE1 (PE2) and the edge image PE1' (PE2') side by side.
As described above, according to the display system 100 shown in the present embodiment, the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
[Second Embodiment]
Next, a display system 100A according to the second embodiment will be described with reference to FIGS. 15A to 15C and FIG.
FIGS. 15A to 15C are diagrams showing an overview of the display system in the present embodiment. The display system 100A shown in these figures displays images for stereoscopic viewing on its display units.
FIG. 15A is an enlarged schematic diagram of part of a cross section of the display device 10A in the display system 100A. FIG. 15B shows the positional relationship between the display device 10A and the observer 1. FIG. 15C shows the arrangement of pixels on the display surface 11S of the display unit 11A.
Even when the observer 1 moves within a predetermined range from the illustrated position, the display device 10A displays the image to be displayed so that it can be viewed stereoscopically.
The display unit 11A and the display unit 12 of the display device 10A correspond to the display unit 11 and the display unit 12 of the display device 10 (FIGS. 1 and 2) in the first embodiment and, as in the display device 10, are arranged at different depth positions. In short, in the display device 10A, as in the display device 10 described above, the display target is displayed by transmitting the image displayed on the display unit 11A through the image displayed on the display unit 12. The display device 10A, in which the display unit 11A and the display unit 12 are combined, displays a stereoscopic image by optically combining the images displayed on the respective display units. The optically combined result is a stereoscopic image in which binocular parallax arises between the left eye L and the right eye R, for example as shown in FIG. 7 described above.
In this way, the display system 100A displays the edge of the target image displayed on the display unit 12 on the display unit 11A, thereby displaying the image displayed on the display device 10A in a three-dimensional manner.
Note that the display unit 11A of the present embodiment has, by itself, a display capable of showing a stereoscopic image (3D image) that can be viewed stereoscopically from a predetermined viewing position, and can display a stereoscopic image even when used alone. For example, display methods that show a stereoscopic image viewable with the naked eye, without special glasses, display mutually different images that produce binocular parallax so that they are seen by the left eye and the right eye, respectively. More specific examples include the lenticular lens method, in which a lenticular lens is placed in front of the display surface, and the parallax barrier method, in which a parallax barrier is placed in front of the display surface as shown in FIG. 26. Compared with other methods such as the parallax barrier method, the lenticular lens method more easily increases the amount of light reaching the eyes (left eye (L) and right eye (R)) for the same amount of emitted light. The lenticular lens method is therefore suited to the display unit 11A of the present embodiment, whose image is further viewed through the display unit 12; an example in which the lenticular lens method is applied to the display unit 11A is shown in FIG. 15A.
In the following, as one example of the display system 100A in the present embodiment, the case where a lenticular-lens display unit 11A is provided is described.
A sheet-like lenticular lens 13 is provided on the display surface 11S of the display unit 11A. The lenticular lens 13 comprises a plurality of convex lenses (for example, cylindrical lenses), each having curvature in one direction and no curvature in the orthogonal direction, arranged side by side in the direction orthogonal to their extending direction. Here, the extending direction of the convex lenses is the vertical direction (Y-axis direction).
On the display surface 11S of the display unit 11A, a plurality of rectangular display areas are provided along the extending direction (Y-axis direction) of the convex lenses, each corresponding to one convex lens of the lenticular lens 13. For example, in one implementation of the display unit 11A of the present embodiment, these display areas are allocated so that a plurality of display areas R1, R2, R3, R4, and R5 for right-eye images and a plurality of display areas L1, L2, L3, L4, and L5 for left-eye images are arranged alternately. When the observer 1 is located within the range in which a paired right-eye image (for example, the image displayed in the display area R1) and left-eye image (for example, the image displayed in the display area L1) corresponding to a convex lens can be observed, the display on the display unit 11A can be observed as a stereoscopic image.
As shown in FIG. 15C, the display areas on the display surface 11S of the display unit 11A are arranged in the order L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, and so on. In each display area, a plurality of pixels are arranged along the extending direction (Y-axis direction) of the area. For example, across the display areas L1, R1, and L2, the pixel PICL1, the pixel PICR1, and the pixel PICL2 are provided in order along the X-axis direction. The pair of the pixel PICL1 and the pixel PICR1, for example, is a pair that lets the observer 1 perceive a stereoscopic image. The brightness (luminance) within each pixel is displayed uniformly. Each pixel in the display unit 11A need only be configured so that it can be treated as a single pixel, and each pixel may further comprise a plurality of sub-pixels. For example, if the display unit 11A can display color images, each pixel may have sub-pixels corresponding to the three colors (RGB).
In the display unit 11A configured in this way, the positions most suitable for observing the displayed stereoscopic image are arranged discretely, like the position of the observer 1 shown in FIG. 15B. A predetermined range around each position most suitable for observing the displayed stereoscopic image is a region in which the stereoscopic image is easy to observe.
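The alternating L/R allocation described above can be illustrated with a short sketch. This is a minimal illustration assuming one pixel column per display region and two equally sized per-eye images; the function name is an assumption, and real lenticular layouts may map several columns per lens.

    import numpy as np

    def interleave_views(left, right):
        # left, right : 2-D luminance arrays of the same shape (H x W).
        # Returns an H x 2W array whose even columns come from the
        # left-eye image and odd columns from the right-eye image,
        # matching the L1, R1, L2, R2, ... ordering of the display areas.
        h, w = left.shape
        out = np.empty((h, 2 * w), dtype=left.dtype)
        out[:, 0::2] = left   # display areas L1, L2, ...
        out[:, 1::2] = right  # display areas R1, R2, ...
        return out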
Next, the configuration of the display system 100A in the present embodiment will be described with reference to FIG.
FIG. 16 is a schematic block diagram showing the configuration of a display system 100A according to an embodiment of the present invention. A display system 100A illustrated in FIG. 16 includes an image processing device 2A and a display device 10A.
The display device 10A in the present embodiment includes a display unit 11A and a display unit 12.
The image processing device 2A generates a contour image for displaying a stereoscopic image on the display device 10A.
The image processing apparatus 2A includes a contour correction unit 210A, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
The contour correction unit 210A in the present embodiment corrects and outputs at least one item of the supplied image information (image information D11P, image information D12P). For example, the contour correction unit 210A supplies image information D11, obtained by correcting the image information D11P, to the display device 10A.
The contour correction unit 210A in the present embodiment includes a determination unit 213 and a correction unit 211A. The correction unit 211A corresponds to the above-described correction unit 211 (FIG. 9).
The correction unit 211A of the present embodiment generates image information D11 of an image to be displayed for each of the left eye (L) and the right eye (R) based on the image information D11P. In the following description of the present embodiment, image information D11 of an image displayed on the display unit 11A for the left eye (L) and the right eye (R), respectively, is referred to as image information D11L and image information D11R.
In generating the image information D11L and the image information D11R, the contour correction unit 210A applies the contour image correction methods shown in the example and the modifications of the first embodiment described above, generating each as a contour image corresponding to the image information D12P. In doing so, the pixels of the display unit 11 in the first embodiment are read as the pixels of the display unit 11A in the present embodiment, or as the rectangular display areas arranged discretely on the display unit 11A.
In the generation of the contour image in the first embodiment described above, the contour image was corrected with reference to the detected position of the observer 1. In the present embodiment, by contrast, the contour correction unit 210A corrects the contour image based on the positions of the left eye (L) and the right eye (R) of the observer 1 estimated from the detected position of the observer 1, instead of the position of the observer 1 itself. Alternatively, the contour correction unit 210A corrects the contour image based on the directly detected positions of the left eye (L) and the right eye (R) of the observer 1.
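Where only a single head position is detected, the per-eye positions can be estimated with a simple offset. The sketch below assumes the eyes sit symmetrically about the detected position along the X axis; the 65 mm interpupillary distance is a common average used here as an assumption, not a value given in this description.

    def eye_positions(observer_x, ipd=0.065):
        # observer_x : detected head position along the X axis (metres)
        # ipd        : assumed interpupillary distance (65 mm average)
        return observer_x - ipd / 2.0, observer_x + ipd / 2.0

    # Each eye position is then used in place of the single observer
    # position when correcting the contour image for that eye's view.
    left_x, right_x = eye_positions(0.10)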
In short, the contour correction unit 210A generates the image information D11L from the image information D11P with reference to the position of the left eye (L) of the observer 1, and causes the display unit 11A to display it as the image presented to the left eye (L). The image observed by the left eye (L) is thus an optical combination of the image based on the image information D11L and the image based on the image information D12. Likewise, the contour correction unit 210A generates the image information D11R from the image information D11P with reference to the position of the right eye (R) of the observer 1, and causes the display unit 11A to display it as the image presented to the right eye (R). The image observed by the right eye (R) is thus an optical combination of the image based on the image information D11R and the image based on the image information D12.
In the present embodiment, the images presented to the left eye (L) and the right eye (R) are generated independently as described above, but the result of imaging by the optically combined images is as shown in FIG. 5 described above.
In the right eye R of the observer 1, an optical image IMR is formed by combining the image P11R visually recognized by the right eye R and the image P12R visually recognized by the right eye R.
In the display system 100A of the present embodiment, the images presented to the left eye (L) and the right eye (R) are generated independently, and images that are easy to view stereoscopically are displayed for the left eye (L) and the right eye (R), respectively. The observer 1 can thereby observe a stereoscopic image with a stronger stereoscopic effect.
Furthermore, by correcting the contour image as shown in the first embodiment when generating the image P11R visually recognized by the right eye R and the image P12R visually recognized by the right eye R, the influence of a contour image spanning two pixels on the display unit 11A can be reduced.
As described above, in the display system 100A of the present embodiment, let the image information to be displayed on the display unit 11A, out of the image information that displays the display target stereoscopically at a predetermined position by binocular parallax on the display unit 11A (first display unit) and the display unit 12 (second display unit), be the image information D11 (first image information).
The contour correction unit 210A corrects the image information corresponding to the contour portion of the display target within the image information D11, based on the predetermined position, the positions of the plurality of two-dimensionally arranged pixels of the display unit 11A, and the pixel positions in the display unit 11A corresponding to the contour portion of the display target. The predetermined position is set, for example, to the respective positions of both eyes of the observer 1. Alternatively, instead of the respective positions of both eyes, a single position representing the observer 1 may be used, as in the first embodiment described above.
(Modification of display unit 11A)
In the above description, the case where the two-lens lenticular method is applied to the display unit 11A was described as an example, but a multi-lens lenticular method can also be applied to the display unit 11A.
When the multi-lens lenticular method is applied to the display unit 11A, the stereoscopic image can be observed from a plurality of directions facing the display device 10A. The display unit 11A displays so that the stereoscopic image can be observed both from a position directly in front of the display unit 11A and from positions off the front. In this case, it is desirable to display on the display unit 11A contour images such that, even if the observer 1 moves away from the position in front of the display unit 11A, a stereoscopic image suited to observation from each of those directions can be observed continuously.
Therefore, for observation from directions off the front of the display unit 11A, the contour correction unit 210A causes the display unit 11A to display contour images that produce a stereoscopic image suited to each direction from which the stereoscopic image can be observed.
For example, based on the supplied image information (image information D11P, image information D12P), the contour correction unit 210A generates and outputs the image information D11, which displays a stereoscopic image suited to each direction from which the stereoscopic image can be observed, as a contour image corresponding to each of those directions. In generating the image information D11, the contour correction unit 210A applies, for example, the contour image correction method shown in the first embodiment and outputs contour images that display stereoscopic images suited to the observable directions determined by the lenticular lens 13.
For example, the contour correction unit 210A may output, by the contour image correction method shown in the first embodiment, contour images that display a stereoscopic image suited to each direction from which the stereoscopic image can be observed. In addition, taking as reference the pixels that show the contour when observed from the front of the display unit 11A, the contour correction unit 210A adjusts the luminance of the image displayed on the pixels adjacent to those contour pixels according to the luminance of the contour pixels as observed from the front. In this way, the contour images for observation from the respective directions can be generated by the contour correction unit 210A and displayed on the display unit 11A.
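At a high level this amounts to running the single-position correction once per viewable direction. The sketch below is a minimal illustration under that reading; both names are assumptions, and generate_contour stands in for whichever first-embodiment correction is applied.

    def contours_for_views(view_positions, generate_contour):
        # view_positions   : the discrete observation positions determined
        #                    by the lenticular lens 13 (assumed known)
        # generate_contour : callable that applies the first-embodiment
        #                    contour correction for one viewing position
        # One corrected contour image per direction; the display then
        # shows each image to the corresponding view region.
        return [generate_contour(p) for p in view_positions]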
By displaying on the display unit 11A the contour images for the respective directions off its front in this way, a stereoscopic image that changes in accordance with the movement of the observer 1 can be displayed. Furthermore, if the contour correction unit 210A generates contour images that change continuously with the movement of the observer 1, the stereoscopic image displayed on the display unit 11A can be changed gradually and continuously with that movement, so that the stereoscopic image appears to be traced as the observer moves.
Thus, even when the multi-lens lenticular method is applied, by adjusting the position of the contour portion of the display target shown by the image information D11 (first image information) according to the position of the observer 1, the display system 100A can display, according to each stereoscopically viewable direction of the display unit 11A, a contour image that lets the display target be viewed stereoscopically from the position of the observer 1. The range over which the observer 1 can observe the stereoscopic image can thereby be widened.
As described above, according to the display system 100A shown in the present embodiment, the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
[Third Embodiment]
Subsequently, a display system 100B according to the third embodiment will be described with reference to FIGS. 17A to 21.
FIGS. 17A to 17C are diagrams showing an overview of the display system in the present embodiment. The display system 100B shown in these figures displays images for stereoscopic viewing on its display units.
FIG. 17A shows an enlarged part of a cross section of the display device 10B in the display system 100B, with the observer 1 located within a stereoscopically viewable range. FIG. 17B shows the positional relationship between the display device 10B and the observer 1. FIG. 17C shows the arrangement of pixels on the display surface 10S of the display device 10B. Even if the observer 1 moves within a predetermined range from the illustrated position, the observer 1 can stereoscopically view the image displayed by the display device 10B.
The display device 10B of the present embodiment is described using an example that has a display capable of showing a stereoscopic image (3D image) viewable stereoscopically from a predetermined viewing position even when used alone, without being combined with another display device.
A sheet-like lenticular lens 13 is provided on the display surface 10S of the display device 10B shown in FIG. 17A. The lenticular lens 13 comprises a plurality of convex lenses (for example, cylindrical lenses), each having curvature in one direction and no curvature in the orthogonal direction, arranged with their extending directions aligned and side by side in the direction orthogonal to the extending direction. Here, the extending direction of the convex lenses is the vertical direction (Y-axis direction).
The display surface 10S of the display device 10B is provided with a plurality of rectangular display areas along the extending direction (Y-axis direction) of the convex lenses, each corresponding to one convex lens of the lenticular lens 13. For example, in one implementation of the display device 10B of the present embodiment, these display areas are allocated as a plurality of display areas R1, R2, R3, R4, and R5 that display right-eye images and a plurality of display areas L1, L2, L3, L4, and L5 that display left-eye images.
Thus, the display device 10B displays a stereoscopic image using the parallax generated by the lenticular lens 13. The display device 10B of the present embodiment includes, for example, a lenticular lens display (display unit 11B and display unit 12B).
The display unit 11B is provided with display areas for presenting an image to one of the two eyes, distributed over a plurality of display areas. Likewise, the display unit 12B is provided with display areas for presenting an image to the other eye, distributed over a plurality of display areas. In the case of FIG. 17A, for example, the display unit 11B is provided in the display areas R1, R2, R3, R4, and R5 (first display areas), and the display unit 12B is provided in the display areas L1, L2, L3, L4, and L5 (second display areas). Thus, the arrangement of the display unit 11B and the display unit 12B in the display device 10B differs from that of the display device 10 (FIGS. 1 and 2) in the first embodiment: they are arranged along the same surface (the display surface 10S).
As shown in FIG. 17C, the display areas on the display surface 10S of the display device 10B are arranged in the order L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, and so on. In each display area, a plurality of pixels are arranged along the extending direction (Y-axis direction) of the area. For example, across the display areas L1, R1, and L2, the pixel PICL1, the pixel PICR1, and the pixel PICL2 are provided in order along the X-axis direction. The pair of the pixel PICL1 and the pixel PICR1 is a pair that lets the observer 1 perceive a stereoscopic image. The brightness (luminance) within each pixel is displayed uniformly. Each pixel need only be configured so that it can be treated as a single pixel, and each pixel may further comprise a plurality of sub-pixels. For example, if the display unit 11B and the display unit 12B can display color images, each pixel may have sub-pixels corresponding to the three colors (RGB).
The configuration of the display system 100B in the present embodiment is described with reference to FIG. 18.
Hereinafter, as one example of the display system 100B in the present embodiment, the case where a two-lens lenticular lens system is applied to the display device is described.
FIG. 18 is a schematic block diagram showing the configuration of a display system 100B according to an embodiment of the present invention. A display system 100B illustrated in FIG. 18 includes an image processing device 2B and a display device 10B.
The image processing device 2B generates a contour image for displaying a stereoscopic image on the display device 10B.
The image processing device 2B includes a contour correction unit 210B, a stereoscopic image generation unit 220B, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
The stereoscopic image generation unit 220B of the present embodiment generates image information D11P and image information D12P such that the display target displayed on the display unit 11B and the display unit 12B can be viewed stereoscopically from a predetermined position by binocular parallax. The predetermined position is the viewing position at which the contour can be emphasized most, and is a position at which the observer 1 can visually recognize the stereoscopic image. The stereoscopic image generation unit 220B thus generates the image information D11P and the image information D12P with the position of the contour portion of the display target shown by the image information D11P adjusted according to the predetermined position.
For example, the stereoscopic image generation unit 220B receives image information D11S to be displayed on the display unit 11B and the display unit 12B, and, based on the image information D11S, generates image information D11P and image information D12P such that the display target can be viewed stereoscopically from the predetermined position.
The stereoscopic image generation unit 220B is image information in which the position of the contour portion to be displayed is adjusted according to a predetermined position, and includes the lenticular lens 13 as an optical unit that generates binocular parallax. The first image information to be displayed on the display unit 11B may be generated.
The contour correction unit 210B of this embodiment comprises a determination unit 213B and a correction unit 211B.
The determination unit 213B performs a determination that sets the conditions controlling the correction processing in the correction unit 211B described later. In correcting the image information D11P and the image information D12P contained in the contour portion of the display target, the determination unit 213B determines, for example, whether the position of the contour portion to be displayed on the display unit 11B spans the range of a first pixel and a second pixel that are adjacent among the pixels of the display unit 11B. In the case of a two-view lenticular lens, the adjacent first and second pixels are pixels provided in adjacent columns of the display unit 11B.
From this determination result, the determination unit 213B judges that correction is necessary when the position of the contour portion to be displayed on the display unit 11B spans the range of the adjacent first and second pixels, and that correction is unnecessary when it does not. The adjacent first and second pixels (display areas) are arranged side by side in the direction in which stereoscopic parallax arises (the horizontal direction).
When the determination unit 213B determines that the position of the contour portion of the display target spans the range of the first and second pixels, the correction unit 211B corrects the contour portion of the display target shown on the display device 10B. In performing the correction, the correction unit 211B corrects, for example, the image information D11P displayed on at least one of the first and second pixels, based on the predetermined position, the positions of the two-dimensionally arrayed pixels of the display unit 11B, and the pixel position (display area position) in the display unit 11B that corresponds to the contour portion of the display target.
The correction unit 211B corrects the image information D11P displayed on either the first or the second pixel according to a correction amount for the image information D11P defined for the first and second pixels as a pair. Likewise, it corrects the image information D12P displayed on either the first or the second pixel according to a correction amount for the image information D12P defined for that pair.
In this way, the contour correction unit 210B can correct the image information D11P and the image information D12P so that the position of the observer 1 (user) viewing the display unit 11B and the display unit 12B becomes the predetermined position from which the display target can be viewed stereoscopically by binocular parallax.
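Though the specification prescribes no implementation, the determination and pair-based correction described above can be illustrated in outline. The following Python sketch is a hypothetical rendering: the sub-pixel contour position contour_x, the pixel_pitch parameter, and the proportional split of luminance between the paired pixels are all assumptions introduced for illustration, not taken from the specification.

```python
import math

def contour_spans_pair(contour_x: float, pixel_pitch: float) -> bool:
    """Return True when a contour at sub-pixel position contour_x falls
    strictly between the centers of two adjacent pixel columns."""
    frac = contour_x / pixel_pitch - math.floor(contour_x / pixel_pitch)
    return frac != 0.0

def correct_pair(contour_x: float, pixel_pitch: float, luminance: float):
    """Split the contour luminance across the adjacent first and second
    pixels in proportion to where the contour sits between them."""
    col = math.floor(contour_x / pixel_pitch)    # first pixel (column index)
    frac = contour_x / pixel_pitch - col         # 0.0 .. 1.0 toward column col+1
    return col, luminance * (1.0 - frac), luminance * frac

# A contour 30% of the way from column 4 toward column 5:
print(contour_spans_pair(4.3, 1.0))   # True -> correction needed
print(correct_pair(4.3, 1.0, 200.0))  # approximately (4, 140.0, 60.0)
```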
As described above, the contour correction unit 210B of this embodiment corrects and outputs at least one of the supplied pieces of image information according to the image information supplied to it. The image information processed by the contour correction unit 210B includes, for example, the image information D11P (first image information) and the image information D12P (second image information).
The image information D11P is, of the image information that stereoscopically displays the display target at the predetermined position by binocular parallax on the display unit 11 (first display unit) and the display unit 12 (second display unit), the image information displayed on one of the display unit 11 and the display unit 12. The image information D12P is, of that image information, the image information displayed on the other of the display unit 11 and the display unit 12.
In the first display form, the contour correction unit 210B corrects the image information D11P (first image information) to generate image information D11, and corrects the image information D12P (second image information) to generate image information D12. In the second display form, the contour correction unit 210B corrects the image information D11P to generate the image information D12, and corrects the image information D12P to generate the image information D11.
In short, the contour correction unit 210B switches the correspondence between the image information D11P and D12P and the image information D11 and D12 according to the display form, and outputs the result.
The correction method of the first embodiment can be applied as the correction method in this embodiment.
Furthermore, as described below, the contour correction unit 210B of this embodiment may apply a plurality of correction methods depending on the amount of movement of the observer 1.
In the lenticular lens type display device of the system shown in FIG. 18 described above, the region in which a stereoscopic image can be observed is limited by the optical characteristics of the lenticular lens 13.
Referring to FIG. 19, as one example of selecting a correction method based on that region, an image correction method for the case where the observer moves out of the region, determined by the optical characteristics of the lenticular lens 13, in which the stereoscopic image can be observed is described below.
FIG. 19 is a diagram explaining the correction method for the case where the observer moves out of the region, determined by the optical characteristics of the lenticular lens 13, in which the stereoscopic image can be observed. In the display device 10B illustrated in FIG. 19, the display unit 11B is indicated by columns S1 and the display unit 12B by columns S2. FIG. 19 shows the display device 10B and the observer 1 (1') at different positions in the X-axis direction.
When the observer 1 is located at Ma(i) (or Ma(i+1)), the left eye of the observer 1 is within the region Z1 (or region Z3) and the right eye is within the region Z2 (or region Z4). The region Z1 (or Z3) is the range from which the column S1 of the display unit 11B can be observed, and the region Z2 (or Z4) is the range from which the column S2 of the display unit 12B can be observed. When the left eye of the observer 1 is within the region Z1 (or Z3), the right eye is within the region Z2 (or Z4), and the display is in the first display form, in which the image presented to the left eye is shown on the display unit 11B (column S1) and the image presented to the right eye is shown on the display unit 12B (column S2), the observer 1 can observe the stereoscopic image.
In this first display form, if the left eye of the observer 1 moves into a region outside the region Z1 (or Z3), or the right eye moves into a region outside the region Z2 (or Z4), the observer 1 can no longer observe the stereoscopic image. In short, even when images are displayed in the first display form so that the stereoscopic image can be observed, regions exist in which the stereoscopic image cannot be observed while the observer 1 moves from Ma(i) to Ma(i+1).
The display system 100B of this embodiment therefore performs processing that turns some of the regions in which the stereoscopic image cannot be observed in the first display form into regions in which it can be observed. Some of those regions allow the displayed image to be observed as a stereoscopic image when the following condition is met: as when the observer 1 is located at Ma'(i), the left eye of the observer 1 is inside the region Z2 and the right eye is inside the region Z3. In that case, displaying in the second display form instead of the first makes it possible to present the image so that it can be observed as a stereoscopic image. The second display form is a display form in which the positions of the images presented to the left eye and the right eye of the observer 1 are reversed relative to the first display form.
By switching the display state of the display device 10B according to the position of the observer 1 in this way, some of the regions in which the stereoscopic image cannot be observed under the first display state can be turned into regions in which it can be observed under the second display state.
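As one concrete reading of the zone logic above, the sketch below chooses between the first and second display forms from the zones containing the observer's two eyes. The zone_edges encoding and the numeric example are hypothetical; the specification fixes only the rule that the first form applies when the left eye is in Z1 (or Z3) and the right eye in Z2 (or Z4), and the second form when the left eye is in Z2 and the right eye in Z3.

```python
def zone_of(x: float, zone_edges: list[float]) -> int:
    """Return the index (0 -> Z1, 1 -> Z2, ...) of the viewing zone
    containing horizontal eye position x; -1 if outside all zones."""
    for i in range(len(zone_edges) - 1):
        if zone_edges[i] <= x < zone_edges[i + 1]:
            return i
    return -1

def choose_display_form(left_x: float, right_x: float,
                        zone_edges: list[float]) -> int:
    left, right = zone_of(left_x, zone_edges), zone_of(right_x, zone_edges)
    if (left, right) in {(0, 1), (2, 3)}:  # left in Z1/Z3, right in Z2/Z4
        return 1                           # first display form
    if (left, right) == (1, 2):            # left in Z2, right in Z3
        return 2                           # second display form (L/R swapped)
    return 0                               # no stereoscopic observation

# Example with four equal zones Z1..Z4 of assumed width 32 mm each.
edges = [0.0, 32.0, 64.0, 96.0, 128.0]
print(choose_display_form(left_x=40.0, right_x=72.0, zone_edges=edges))  # -> 2
```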
As described above, when the observer 1 moves out of the region, determined by the optical characteristics of the lenticular lens 13, in which the stereoscopic image can be observed, the contour correction unit 210B preferably switches the display form.
Next, a correction method for the case where the observer moves is described with reference to FIG. 20.
FIG. 20 is a diagram explaining the correction method for the case where the observer moves.
The stereoscopic image displayed on the display device 10B while the position of the observer 1 shown in FIG. 19 moves from Ma(i) to Ma'(i) is described below.
FIG. 20(a) schematically shows the stereoscopic image observed when the observer 1 views the target image from the position Ma(i).
FIGS. 20(a) to 20(d) schematically show the stereoscopic images observed while the observer 1, still viewing the display device 10B, moves along the X axis from the position Ma(i) to Ma'(i).
The top row shows the positional relationship between the display device 10B and the observer 1, as in FIG. 19 described above.
The rows below it show, from top to bottom: the luminance of the edges of the image presented to the left eye (contour image PE1 and contour image PE2), corrected according to the position of the observer 1; the luminance of the image information D11P (D12P); the luminance of the image presented to the left eye; and the luminance of the image presented to the right eye.
In this embodiment, the contour correction unit 210B calculates the correction amounts of the contour image PE1 and the contour image PE2 based on the image information D11 and the image information D12, respectively. In the explanation of FIG. 20, the process of obtaining the image information D11 from the image information D11P is described as representative, and for the image information D12 only the result is shown.
In short, the luminance of the image presented to the left eye is calculated by adding, to the luminance of the image information D11P, the luminance of the contour images PE1 and PE2 generated based on the image information D11P. For example, the luminance of the image information D11 corresponds to an image that causes the left eye of the observer 1 to perceive the brightness IML, as shown in FIG. 6 described above. In this way, an image in which the edge portions of the object (the left-side edge portion E1 and the right-side edge portion E2) are emphasized is generated and displayed on the display unit 11B as the image information D11. Similarly, the luminance of the image presented to the right eye is calculated based on the image information D12P.
As a result, the images observed by the two eyes produce binocular parallax in which the luminance or width of the edge portions has been corrected. For example, the images perceived by the observer 1 have mutually different edge luminances, like the image information D11 and D12 shown in FIG. 7 described above.
In FIG. 20(a), the observer 1 is located at Ma(i), and an image is displayed in which the contour images PE1 and PE2 for correcting the contour have been generated at the edge portions (the left-side edge portion E1 and the right-side edge portion E2) of the rectangular object displayed substantially in front of the observer 1. For example, an image corrected so that the luminance of the contour image PE1 equals that of the contour image PE2 in the edge-portion correction is displayed. The display form in FIG. 20(a) is, for example, the first display form described above with reference to FIG. 19.
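The luminance addition just described, the base image plus the contour images PE1 and PE2 at the object's left and right edges, can be sketched as follows. Deriving the edges from a horizontal gradient and clipping to an 8-bit range are assumptions made for illustration; the specification defines only the addition itself.

```python
import numpy as np

def add_contour_images(base: np.ndarray, edge_gain: float = 0.5) -> np.ndarray:
    """Add contour images to the base image: PE1-like components on rising
    edges (the left side of a bright object) and PE2-like components on
    falling edges (just past its right side), from the horizontal gradient."""
    grad = np.zeros(base.shape, dtype=np.float64)
    grad[:, 1:] = base[:, 1:].astype(np.float64) - base[:, :-1].astype(np.float64)
    pe1 = np.clip(grad, 0, None)    # positive steps: left-side edge
    pe2 = np.clip(-grad, 0, None)   # negative steps: right-side edge
    out = base.astype(np.float64) + edge_gain * (pe1 + pe2)
    return np.clip(out, 0, 255).astype(np.uint8)

# A bright rectangle on a dark background gains emphasized edges.
img = np.zeros((4, 8), dtype=np.uint8)
img[:, 3:6] = 128
print(add_contour_images(img))
```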
When the position of the observer 1 moves in the X-axis direction (toward Ma'(i)) to a position such as that shown in FIG. 20(b), the direction in which the observer 1 sees the object shifts to the left, as seen from the observer 1, compared with the direction in FIG. 20(a). The position of the observer 1 in FIG. 20(b) is within the first half of the range from Ma(i) to Ma'(i), within a predetermined range near the midpoint between Ma(i) and Ma'(i). Thus, even when the observer 1 moves comparatively little, the direction in which the object is seen changes. To reduce this change in viewing direction, the correction amounts of the contour images PE1 and PE2 are adjusted according to the amount of movement of the observer 1. As shown in this figure, when the observer 1 moves in the X-axis direction from the reference position (Ma), the correction raises the displayed luminance of the edge portion lying in the same direction as the movement of the observer 1. The various methods described above can be applied to adjust the correction amounts of the contour images PE1 and PE2 according to the amount of movement of the observer 1. Here, the case where the correction method shown in FIG. 13 described above is applied is explained. In short, according to the amount of movement of the observer 1, a correction is performed that adds, in the direction of movement of the observer 1 (the X-axis direction), an edge of the same amount as the pre-correction contour images PE1 and PE2.
Correcting in this way generates a stereoscopic image that makes the observer 1 perceive the object as having moved in the same direction as the observer 1, without changing the position at which the object is displayed on the display device 10B.
In short, the display form in FIG. 20(b) remains the first display form (FIG. 19), as in FIG. 20(a).
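A minimal sketch of the movement-dependent adjustment follows: an edge amount is added on the side toward which the observer moved, reaching an amount equal to the pre-correction contour image at the switching point, in the spirit of the FIG. 13 method. Normalizing the displacement by the half-range to Ma'(i) is an assumption.

```python
def adjust_edges_for_movement(pe1: float, pe2: float,
                              dx: float, half_range: float):
    """Given pre-correction edge luminances (pe1 left, pe2 right) and
    observer displacement dx (> 0 toward Ma'(i)), add edge luminance on
    the movement side in proportion to |dx|, up to an equal amount at
    the display-form switching point (dx == half_range)."""
    w = max(-1.0, min(1.0, dx / half_range))  # normalized displacement
    if w >= 0:                                # moved right: boost right edge
        return pe1, pe2 + w * pe2
    return pe1 + (-w) * pe1, pe2              # moved left: boost left edge

# Halfway to the switching point, the right edge gains half its own amount.
print(adjust_edges_for_movement(pe1=60.0, pe2=60.0, dx=15.0, half_range=30.0))
```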
If the position of the observer 1 moves further in the X-axis direction from the position shown in FIG. 20(b) while the display method used up to FIG. 20(b) is kept, the observer reaches a region in which the stereoscopic image cannot be observed, as shown in FIG. 19.
In short, FIG. 20(b) shows a state in which the observer 1 is located near the limit point of movement in the X-axis direction, inside the region that can still be corrected from the image referenced to Ma(i), a position suited to observing the stereoscopic image.
When the observer moves still further in the X-axis direction (toward Ma'(i)) from the position shown in FIG. 20(b), the display form of the display device 10B is switched to the second display form, as shown in FIG. 19. For example, the switching position is set at the midpoint between Ma(i) and Ma'(i). When the display form up to FIG. 20(b) is the first display form, the display form after switching is the second display form. FIG. 20(c) shows the state immediately after the display form is switched.
Because the display has switched to the second display form (FIG. 19), the region farther from Ma(i) than the midpoint between Ma(i) and Ma'(i) becomes a region in which the stereoscopic image can be observed.
The region likewise remains one in which the stereoscopic image can be observed while the position of the observer 1 moves further in the X-axis direction until it reaches the position Ma'(i) shown in FIG. 20(d).
In this way, regions in which the stereoscopic image could not be observed are made observable while the observer 1 moves from the position Ma(i) to the position Ma'(i), so the observer 1 can observe the stereoscopic image displayed on the display device 10B throughout that range of movement.
In calculating the correction amounts, the correction amounts are adjusted so that the stereoscopic image in FIG. 20(c) and the stereoscopic image in FIG. 20(d) can be recognized as apparently the same image. Thus, even when the observer 1 moves from Ma(i) shown in FIG. 20(a) to Ma'(i) shown in FIG. 20(d), the observer 1 is made to perceive that the image of the target seen by the observer 1 has not changed.
In particular, when the reference image is switched, image qualities such as brightness are matched between the images before and after the switch.
Although the correction shown in FIG. 20 has been described as calculating the correction amount according to the amount of movement of the observer 1, the computation required for this correction processing can be reduced by approximating the detected movement amount of the observer 1 with discrete values referenced to predetermined representative values, or by approximating the correction amount of each contour image with discrete values.
For example, by limiting the number of images that can be displayed according to the position of the observer 1 between Ma(i) in FIG. 20(a) and Ma'(i) in FIG. 20(d) to a few, the computational load of the correction processing that follows the movement of the observer 1 can be reduced.
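The computation-saving discretization mentioned above, approximating the detected movement with predetermined representative values, might look like the following; the number of representative steps between Ma(i) and Ma'(i) is an assumed parameter.

```python
def quantize_movement(dx: float, full_range: float, steps: int = 4) -> float:
    """Snap the detected displacement dx (0 .. full_range) to one of a few
    predetermined representative values, so only a small number of corrected
    images ever needs to be generated (discrete approximation)."""
    dx = max(0.0, min(full_range, dx))
    step = full_range / steps
    return round(dx / step) * step

# Any detected position maps onto one of five representative positions.
for x in (3.0, 11.0, 19.0, 27.0):
    print(quantize_movement(x, full_range=30.0, steps=4))
```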
(Modification in which a multi-view lenticular lens system is applied to the display device)
A modification in which a multi-view lenticular lens system is applied to the display device of the display system 100B of this embodiment is described with reference to FIGS. 21 and 22.
FIG. 21 is a diagram explaining a modification of the display device 10B in the display system 100B.
FIG. 21 shows an enlarged part of a cross section of the display device 10B. In FIGS. 17C and 19, which show the embodiment described above, the display device 10B comprises a two-view lenticular lens type display; the display device 10B of this modification, shown in FIG. 21, comprises a multi-view lenticular lens type display.
In the display system 100B of this modification, the stereoscopic image is displayed on the display device 10B so that, when viewed from each of a plurality of viewing positions (viewing regions), stereoscopic viewing is possible from that viewing angle.
Images captured from a plurality of angles, corresponding to the angles of view from the respective viewing positions (viewing regions), are displayed for the respective viewpoints. Such a display method is sometimes called an integral method (multi-view method).
The multi-view lenticular lens type display units 11B and 12B further divide each of the column regions S1 and S2 of the display units 11B and 12B shown in FIG. 19 into a plurality of columns. For example, the column S1 of the display unit 11B is divided into five columns (S11, S12, S13, S14, S15), and the column S2 of the display unit 12B into five columns (S21, S22, S23, S24, S25). In short, the columns are arranged on the display surface 10S of the display device 10B in the order S11, S12, S13, S14, S15, S21, S22, S23, S24, S25, and so on. In each column, a plurality of pixels are arranged along the extending direction (Y-axis direction) of that column. This makes it possible to display stereoscopic images as seen from five directions.
In this way, when viewed from any of the five directions, the stereoscopic image displayed on the display device 10B can be observed from the position from which it is easiest to observe. When the stereoscopic image is observed from a position shifted from the easiest observing position, it is preferable to detect the position of the observer 1 and perform correction according to the shift calculated from the detected position of the observer 1.
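The five-way column division implies a fixed interleaving of view columns across the display surface. The sketch below shows one hypothetical mapping from a physical column index to a view-column label; the repeat pattern S11..S15, S21..S25 follows the text, while the mapping function itself is illustrative only.

```python
def column_label(col_index: int) -> str:
    """Map a physical display column to its view-column label for the
    5-view division of S1 and S2 (S11..S15, S21..S25, repeating)."""
    group = (col_index // 5) % 2 + 1   # alternating S1-derived / S2-derived
    view = col_index % 5 + 1           # which of the five view columns
    return f"S{group}{view}"

print([column_label(i) for i in range(12)])
# ['S11', 'S12', 'S13', 'S14', 'S15', 'S21', 'S22', 'S23', 'S24', 'S25',
#  'S11', 'S12']
```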
The correction method is described below using, as an example, the images that can be observed from three of the five directions described above.
FIG. 22 is a diagram showing the method of correcting the images observed from three directions.
Here, the directions Da0, Db0, and Dc0 are the directions from which the stereoscopic image is easiest to observe. Three stereoscopic images, each referenced to one of these directions, are prepared. The direction from which the observer 1 views the display device 10B can be calculated from the relative positional relationship with the display device 10B based on the result of detecting the observer 1.
Here, the case where objects (OBJ1 and OBJ2) exist at different positions in the depth direction is described. The object near the observer 1 is denoted OBJ1, and the far object OBJ2.
In each of the stereoscopic images observed from the directions Da0, Db0, and Dc0, the contour is emphasized as a contour image that enables stereoscopic viewing. To simplify the explanation, the following description assumes that the amount of contour in the stereoscopic images observed from the directions Da0, Db0, and Dc0 is uniformly equal.
When observed from the direction Da0, part of the shape of the object OBJ2 is occluded by the object OBJ1. When observed from the direction Da1, part of the shape of the object OBJ2 is occluded by the object OBJ1, and the occluded range is observed to be wider than in the case of the direction Da0. When observed from the direction Dc0, the shape of the object OBJ2 is observed without being occluded by the object OBJ1.
The above three cases are described in order.
First, the case of observing from the direction Da0 is described. With the stereoscopic image observable from the direction Da0 as the reference, the direction of observation on the positive X-axis side of the direction Da0 is denoted Da1, and the direction of observation on the negative X-axis side of the direction Da0 is denoted Da2. In the same way as for the direction Da0, the directions Db1 and Db2 are defined with respect to the direction Db0, and the directions Dc1 and Dc2 with respect to the direction Dc0.
When the actual object is observed from the direction Da1, the occluded range is observed to be wider than when observed from the direction Da0. Conversely, when observed from the direction Da2, the occluded range is observed to be narrower than in the case of the direction Da0. Thus, even a slight change in the observing direction makes the observer 1 recognize, when observing an actual object, that the relative positional relationship with the object has changed.
In the multi-view lenticular lens system shown as this modification, even with the multi-view display device 10B, the directions from which the stereoscopic image can be observed stereoscopically are limited. The display device 10B can therefore present images in a limited set of directions, but it is difficult for it to present images whose observing direction changes continuously. Accordingly, when observation is made from a direction for which no image to be presented has been prepared, an image prepared for one of the representative directions is used as the basis, corrected, and displayed. The correction method is described below.
In the stereoscopic image of the object OBJ1 shown at reference A0, a left-side edge image PE1 added to the left side of the object and a right-side edge image PE2 added to the right side are attached as its edge image PE. Likewise, in the stereoscopic image of the object OBJ2, a left-side edge image PE1' added to the left side of the object and a right-side edge image PE2' added to the right side are attached as its edge image PE.
Next, in the stereoscopic image of the object OBJ1 shown at reference A1, the luminance of the left-side edge image PE1 is raised and the luminance of the right-side edge image PE2 is lowered. In the stereoscopic image of the object OBJ2, the luminance of the left-side edge image PE1' is lowered and the luminance of the right-side edge image PE2' is raised. With this correction, the region in which the objects OBJ1 and OBJ2 appear to overlap is perceived by the observer 1 as wider, so the observer 1 can feel as if viewing from the direction Da1.
Next, in the stereoscopic image of the object OBJ1 shown at reference A2, the luminance of the left-side edge image PE1 is lowered and the luminance of the right-side edge image PE2 is raised. In the stereoscopic image of the object OBJ2, the luminance of the left-side edge image PE1' is raised and the luminance of the right-side edge image PE2' is lowered. With this correction, the region in which the objects OBJ1 and OBJ2 appear to overlap is perceived by the observer 1 as narrower, so the observer 1 can feel as if viewing from the direction Da2.
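The corrections at references A0, A1, and A2 can be condensed into a single rule: shift edge luminance between the left and right edge images, in opposite senses for the near and the far object, according to the viewing offset. The sketch below states that rule; the symmetric gain and its numeric range are assumptions.

```python
def correct_for_offset(pe_left: float, pe_right: float,
                       offset: float, near: bool):
    """offset > 0 corresponds to a direction like Da1 (positive X side),
    offset < 0 to Da2. The near object (OBJ1) gets the opposite shift of
    the far object (OBJ2), widening or narrowing their apparent overlap."""
    s = offset if near else -offset   # opposite sense for OBJ2
    return pe_left * (1.0 + s), pe_right * (1.0 - s)

# Direction Da1 (offset +0.3): OBJ1 left edge up / right edge down,
# OBJ2 left edge down / right edge up, as in the figure at reference A1.
print(correct_for_offset(60.0, 60.0, 0.3, near=True))    # (78.0, 42.0)
print(correct_for_offset(60.0, 60.0, 0.3, near=False))   # (42.0, 78.0)
```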
Next, the figures referenced B0, B1, and B2 are described.
When the actual object is observed from the direction Db1, the occluded range is observed to be wider than in the case of the direction Db0. Conversely, when observed from the direction Db2, the occluded range is observed to be narrower than in the case of the direction Db0. This tendency is the same as for the direction Da0 described above.
As shown in the figures referenced B0, B1, and B2, corrected images can be generated in the same manner as in the figures referenced A0, A1, and A2 described above.
Next, the figures referenced C0, C1, and C2 are described.
As shown in the figure referenced C0, the objects OBJ1 and OBJ2 observed from the direction Dc0 have no region occluding each other.
When the actual objects are observed from the direction Dc1, the gap between the objects OBJ1 and OBJ2 is observed to be narrower than in the case of the direction Dc0. Conversely, when observed from the direction Dc2, the gap is observed to be wider than in the case of the direction Dc0.
As shown in the figures referenced C0, C1, and C2, corrected images are generated in the same manner as in the figures referenced A0, A1, and A2 described above.
As a result, in the stereoscopic image of the objects OBJ1 and OBJ2 shown at reference C1, the gap between the objects OBJ1 and OBJ2 is perceived by the observer 1 as narrower.
Next, in the stereoscopic image of the objects OBJ1 and OBJ2 shown at reference C2, the gap between the objects OBJ1 and OBJ2 is perceived by the observer 1 as wider.
In this way, merely by slightly correcting the displayed image when the direction of observing the display device 10B changes, the observer 1 can recognize that the relative positional relationship with the object has changed, just as when observing an actual object.
As shown above, according to the display system 100B of this embodiment, the visibility of the stereoscopic image can be improved even when the observer 1 moves away from the predetermined position from which the stereoscopic image can be viewed.
[Fourth Embodiment]
Next, a display system 100C according to the fourth embodiment is described with reference to FIGS. 23 to 25.
FIG. 23 is a schematic block diagram showing the configuration of the display system 100C according to an embodiment of the present invention. The display system 100C shown in FIG. 23 comprises an image processing device 2C and the display device 10. The image processing device 2C is characterized in that it generates the contour image.
The image processing device 2C comprises a contour correction unit 210, a stereoscopic image generation unit 220C, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
The contour correction unit 210 of this embodiment corrects and outputs at least one of the supplied pieces of image information according to the image information (image information D11P, image information D12P) supplied from the stereoscopic image generation unit 220C.
The stereoscopic image generation unit 220C of this embodiment generates image information D11P such that the display target shown on the display unit 11 and the display unit 12 can be viewed stereoscopically from a predetermined position by binocular parallax. As described above, the predetermined position is the viewing position at which the contour is emphasized most strongly for the observer 1, and is a position from which the observer 1 can perceive the stereoscopic image.
The stereoscopic image generation unit 220C generates the image information D11P in which the position of the contour portion of the display target rendered by the image information D11P is adjusted according to the predetermined position. For example, the stereoscopic image generation unit 220C receives image information D11S to be displayed on the display unit 11 and the display unit 12, and generates, based on the image information D11S, image information D11P such that the display target can be viewed stereoscopically from the predetermined position.
At this time, the stereoscopic image generation unit 220C outputs image information D12P based on the image information D11S. Alternatively, the stereoscopic image generation unit 220C may generate, based on the image information D11S, image information D12P such that the display target can be viewed stereoscopically from the predetermined position.
More specifically, based on the image information D11S, the stereoscopic image generation unit 220C takes as the image information D11P an image for adding the edge image PE to the image information D11S. The stereoscopic image generation unit 220C generates the image information D11P from the image information D11S according to the positional relationship between the observer 1 and the display device 10. For example, the stereoscopic image generation unit 220C generates the image information D12P with the image information D11S as the reference, and outputs the image information D11P according to the magnification and display position of the image information D11P referenced to the image information D11S, based on the positional relationship between the observer 1 and the display device 10.
Further, for example, the stereoscopic image generation unit 220C may generate image information D11P that displays a display target drawn in perspective as an image rotated about a virtual axis according to the predetermined position. In this case, the stereoscopic image generation unit 220C receives the information (D11S) to be displayed on the display unit 11 and the display unit 12 and, based on the supplied information (D11S), generates image information D11P such that the display target can be viewed stereoscopically from the predetermined position by binocular parallax. Such processing, which displays the display target as an image rotated about an axis, can be applied to information having three-dimensional content. Details of the generation method are described later.
Note that the stereoscopic image generation unit 220C may generate image information D11P in which the position of the contour portion of the display target is set by deforming the shape of the display target displayed based on the image information D11P according to the displacement of the predetermined position. The stereoscopic image generation unit 220C may also generate image information that displays the display target by showing one of the image P11 displayed on the display unit 11 and the image P12 displayed on the display unit 12 through the other, with the position of the contour portion of the display target adjusted according to the predetermined position.
In this embodiment, as shown in FIG. 1, the display unit 11 is arranged at a distance from the display unit 12 in the normal direction (-Z direction) of the display unit 12. In this case, the display unit 12 is a transmissive display unit. In short, the stereoscopic image generation unit 220C generates the image information D11P as image information in which the position of the contour portion of the display target is adjusted according to the predetermined position so that the image P11 displayed on the display unit 11 is shown through the image P12 displayed on the display unit 12.
In this way, the stereoscopic image generation unit 220C generates the image information D11P to be displayed on the display unit 11, which is arranged at a distance from the display unit 12 in the normal direction (-Z direction) of the display unit 12.
In generating the image information D11P, the stereoscopic image generation unit 220C extracts, from the image information D12P that displays an image on the display unit 12 among the image information displayed stereoscopically at the predetermined position, the information indicating the contour portion of the display target, and generates the extracted information as the image information D11P.
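The extraction step just described, deriving D11P as the contour information contained in D12P, could be realized with any standard edge extractor. A minimal sketch using a 4-neighbor Laplacian follows; the choice of extractor and the 8-bit clipping are assumptions.

```python
import numpy as np

def extract_contour(d12p: np.ndarray) -> np.ndarray:
    """Produce contour-only image information (D11P-like) from D12P using
    a 4-neighbor Laplacian magnitude (one possible edge extractor)."""
    f = d12p.astype(np.float64)
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2]
                       + f[1:-1, 2:] - 4.0 * f[1:-1, 1:-1])
    return np.clip(np.abs(lap), 0, 255).astype(np.uint8)

# A small bright square yields nonzero values only along its outline.
img = np.zeros((6, 6), dtype=np.uint8)
img[2:4, 2:4] = 200
print(extract_contour(img))
```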
(Generation of the stereoscopic image matched to the movement of the observer)
The stereoscopic image generation unit 220C extracts feature points for stereoscopic display from the base image information (image information D11S) and generates a stereoscopic image in which the extracted feature-point portions are emphasized for stereoscopic display.
The image information D11S on which the stereoscopic image generation unit 220C bases its processing may be any of a still image, a CG image, and a moving image.
The feature points for stereoscopic display extracted from the base image information D11S can be optimized according to the base image information D11S.
First, the case of a still image is described as an example of the base image.
(1: Still images)
In the case of a still image, the stereoscopic image generation unit 220C can select processing corresponding to the main subject based on information associated with the image or on predetermined information.
(1-1: Still image (portrait))
A first form in which the base image is a still image is a portrait whose main subject is a person. When the base image is a portrait, the image processing device 2C (stereoscopic image generation unit 220C) extracts the person, or the subject in focus, as the feature points for stereoscopic display. Known techniques can be applied for extracting a person from the base image and for extracting the in-focus subject from the base image. Based on the extracted feature points (such as the person), the image processing device 2C (stereoscopic image generation unit 220C) generates stereoscopic image information so that the main subject corresponding to the feature points is displayed stereoscopically.
(1-2: Still image (image whose main subject is specified in advance))
A second form in which the base image is a still image is an image for which the main subject was set at the time of shooting. When the base image is such an image, the image processing device 2C extracts that main subject as the feature points for stereoscopic display. The information indicating the main subject is given, for example, by meta information associated with the image. The image processing device 2C (stereoscopic image generation unit 220C) generates stereoscopic image information so that the extracted main subject is displayed stereoscopically.
(1-3: Still image (image from which feature points can be extracted based on image features))
A third form in which the base image is a still image is an image from which feature points can be extracted based on the features of the image. For example, when the main subject or its background is composed according to the golden ratio, feature points can be extracted based on the features of the image. When the base image is such an image, the image processing device 2C can determine whether dividing lines exist at the positions expected from the golden ratio and detect feature points based on the determination result.
The image processing device 2C may also extract feature points according to their placement within the frame. For example, items placed in the corners of the frame are given a lower extraction priority than items placed at the center. Setting priorities in this way reduces the probability that feature points are extracted from the corners of the frame, so the positions from which feature points are extracted can be kept from being biased.
The image processing device 2C (stereoscopic image generation unit 220C) identifies the main subject based on the extracted image feature points and generates stereoscopic image information so that the identified main subject is displayed stereoscopically. Examples of images that can be handled by this technique include buildings and landscapes.
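The corner-deprioritizing rule amounts to weighting candidate feature points by where they fall in the frame. The sketch below shows one such weighting; the center-distance form of the weight is an assumption, since the specification gives only the ordering (corners below center).

```python
def placement_priority(x: float, y: float, width: int, height: int) -> float:
    """Priority in [0, 1]: highest at the frame center, lowest at the
    corners, so corner candidates are extracted less often (assumed form)."""
    cx = (x - width / 2) / (width / 2)
    cy = (y - height / 2) / (height / 2)
    return max(0.0, 1.0 - 0.5 * (abs(cx) + abs(cy)))

# A center candidate outranks a corner candidate of equal feature strength.
print(placement_priority(960, 540, 1920, 1080))   # center -> 1.0
print(placement_priority(10, 10, 1920, 1080))     # near a corner -> ~0.01
```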
(1-4: Still image (landscape))
A fourth form in which the base image is a still image is an image whose main subject is scenery.
When the base image is determined to have scenery as its main subject, the image processing device 2C may preferentially extract the distant scenery as the feature points. Alternatively, even for an image whose main subject is scenery, when a subject in focus on the near side can be detected, the image processing device 2C may preferentially extract the near subject as the feature points. This choice may be made switchable by a setting.
(1-5: Still images (images captured intermittently))
A fifth form in which the base image is a still image is a set of intermittently captured images. When the base images are intermittently captured images, the image processing device 2C can extract the main subject based on difference information between the multiple images.
For example, the image processing device 2C can extract, as feature points, a subject with a large amount of movement within the frame or a subject captured across several consecutive images. Techniques are known for extracting, from consecutively captured images, a subject that moves (in amount and direction) with characteristics different from the movement of the background, such as applying inter-frame differencing or processing the movement amounts as vectors. The image processing device 2C (stereoscopic image generation unit 220C) identifies the main subject based on the extracted subject and its feature points, and generates stereoscopic image information so that the identified main subject is displayed stereoscopically.
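The inter-frame differencing cited above as a known technique can be sketched as follows; the threshold value and the union-of-differences rule are assumptions.

```python
import numpy as np

def moving_subject_mask(frames: list, thresh: int = 25) -> np.ndarray:
    """Union of thresholded differences between consecutive frames: a crude
    inter-frame-difference extractor for a subject moving against a mostly
    static background."""
    mask = np.zeros(frames[0].shape, dtype=bool)
    for a, b in zip(frames, frames[1:]):
        mask |= np.abs(b.astype(np.int16) - a.astype(np.int16)) > thresh
    return mask

# Three frames of a bright block stepping rightward on a dark background.
frames = [np.zeros((3, 6), dtype=np.uint8) for _ in range(3)]
for i, frame in enumerate(frames):
    frame[:, i:i + 2] = 200
print(moving_subject_mask(frames).astype(int))  # columns 0..3 flagged
```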
 続いて、基となる画像の一例として、動画像の場合について説明する。
(2:動画像の場合)
 基となる画像が動画像である場合、撮影時の履歴情報による焦点距離情報と、被写体までの距離情報とから主要被写体を抽出することができる。
 画像処理装置2Cは、例えば、撮影時の履歴情報による焦点距離情報と距離情報とを利用して、撮影時に焦点を合わせた位置にある被写体を主要被写体として抽出する。画像処理装置2C(立体画像生成部220C)は、抽出した主要被写体を立体表示するように立体画像情報を生成する。
Next, a case where the base image is a moving image will be described.
(2: Moving images)
When the base image is a moving image, the main subject can be extracted from focal length information recorded in the shooting history and from distance information to the subjects.
For example, the image processing apparatus 2C uses the focal length information and distance information from the shooting history to extract, as the main subject, the subject located at the position that was in focus at the time of shooting. The image processing device 2C (stereoscopic image generation unit 220C) generates stereoscopic image information so that the extracted main subject is displayed stereoscopically.
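A minimal sketch of this selection, assuming subject distances and the focus distance recorded in the shooting history are already available as plain numbers; the dictionary keys and the tolerance are hypothetical:

    def pick_focused_subject(subjects, focus_distance_m, tolerance_m=0.5):
        """Choose as main subject the detected subject whose distance matches
        the focus distance recorded in the shooting history (e.g. metadata).

        subjects: list of dicts like {"label": str, "distance_m": float}
        """
        in_focus = [s for s in subjects
                    if abs(s["distance_m"] - focus_distance_m) <= tolerance_m]
        # If several subjects fall inside the tolerance, take the closest match.
        return min(in_focus,
                   key=lambda s: abs(s["distance_m"] - focus_distance_m),
                   default=None)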
Next, a case where the base image is a computer graphics image (CG image) will be described.
(3: CG images)
When the base image is a CG image, depth-direction information (a depth MAP) about the display target can be created using the 3D model information. Based on the depth-direction information (depth MAP), a display target can be extracted, and the direction in which an object is displayed on the display device 10 can be set.
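One simple way to obtain such a depth map from 3D model data is a point-based z-buffer over the model's vertices or sampled surface points. The following sketch assumes points already transformed into camera coordinates and a pinhole projection; the function and parameter names are hypothetical:

    import numpy as np

    def depth_map_from_points(points_cam, width, height, focal_px):
        """Build a coarse depth map (depth MAP) from 3D model points in camera
        coordinates (Z increasing away from the viewer). Each point is
        perspective-projected and its depth kept if it is the nearest one
        so far (a point-based z-buffer)."""
        depth = np.full((height, width), np.inf)
        for x, y, z in points_cam:
            if z <= 0:
                continue  # behind the camera
            u = int(round(focal_px * x / z + width / 2))
            v = int(round(focal_px * y / z + height / 2))
            if 0 <= u < width and 0 <= v < height:
                depth[v, u] = min(depth[v, u], z)
        return depth

    # A display target could then be extracted by thresholding the depth:
    # mask = depth < near_depth_of_interest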
With reference to FIG. 24, a process of displaying a display target drawn in perspective as a pseudo-rotated image will be described.
FIG. 24 is a diagram illustrating a process of pseudo-rotating and displaying a display target drawn in perspective.
FIGS. 24(a) and 24(b) show a rectangular parallelepiped in perspective (two-point perspective).
FIG. 24(a) shows the rectangular parallelepiped when FP1 and FP2 are set as the vanishing points, that is, a rectangular parallelepiped having the illustrated vertices (QA, QB, QC, QD, QE, QF, QG); the vertex hidden behind the body is not included.
For example, consider virtually generating an image of the rectangular parallelepiped shown in FIG. 24(a) as viewed from a position closer than the current viewpoint to the front side of the face formed by QA-QB-QF-QG. When the viewpoint is moved in this way, there is a location from which the rectangular parallelepiped appears as in FIG. 24(b).
The conversion from FIG. 24(a) to FIG. 24(b) can be obtained as if the entire coordinate system containing the rectangular parallelepiped were rotated about the rotation axis RA shown in FIG. 24(a). In short, the stereoscopic image generation unit 220C generates the image information D11P and image information D12P for displaying the rectangular parallelepiped shown in FIG. 24(b), as an image obtained by rotating the rectangular parallelepiped (display target) drawn in perspective about the rotation axis RA (a virtual axis) according to the predetermined position.
By such conversion processing, images of the rectangular parallelepiped viewed from various directions can be obtained by computation. In the present embodiment, the movement of the position of the observer 1 may be detected, and the direction from which the rectangular parallelepiped displayed on the display device 10 is viewed may be rotated in conjunction with the detected amount of movement. By linking the rotation to the amount of movement of the observer 1 in this way, the displayed image can be rotated without using any special input means.
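A sketch of such a pseudo-rotation, using Rodrigues' rotation formula (a standard technique, not one named in this disclosure) to rotate the cuboid's vertices about the virtual axis RA; coupling the angle to the detected movement of observer 1, for example in proportion to it, is an assumption about one possible linkage:

    import numpy as np

    def rotate_about_axis(vertices, axis_point, axis_dir, angle_rad):
        """Rotate 3-D vertices (e.g. the cuboid QA..QG) about a virtual
        rotation axis RA defined by a point on the axis and a direction,
        as in the pseudo-rotation from Fig. 24(a) to Fig. 24(b)."""
        k = np.asarray(axis_dir, dtype=float)
        k /= np.linalg.norm(k)
        p = np.asarray(vertices, dtype=float) - axis_point
        # Rodrigues' rotation formula applied to every vertex at once.
        cos_t, sin_t = np.cos(angle_rad), np.sin(angle_rad)
        rotated = (p * cos_t
                   + np.cross(k, p) * sin_t
                   + np.outer(p @ k, k) * (1.0 - cos_t))
        # angle_rad could be set proportional to the detected movement
        # of observer 1, linking the rotation to the observer's motion.
        return rotated + axis_point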
In addition, as described above, the shape of the displayed rectangular parallelepiped is deformed in accordance with the rotation linked to the amount of movement of the observer 1. In order to stereoscopically display an image viewed from an arbitrary direction, the stereoscopic image generation unit 220C generates, from the input image information, a contour image in which the position of the contour portion of the rectangular parallelepiped to be displayed is set according to the display conditions. As a result, even when the display conditions change, the response time until the stereoscopic image is displayed can be shortened.
As described above, the display system 100C can handle various images, whether still images, moving images, or CG images, as display targets. By specifying, as the main subject, the target to be emphasized within the image according to the characteristics of each type of image, the stereoscopic expressiveness of the generated stereoscopic image information can be enhanced.
The contour correction unit 210 in the image processing apparatus 2C corrects the contour image based on the image information D11P and image information D12P (stereoscopic image information) generated by the stereoscopic image generation unit 220C. The contour correction unit 210 corrects the position and direction in which the contour image is arranged, and the brightness balance, based on the representative position of the user. The contour correction unit 210 also sets the brightness of the contour portion according to the calculated amount of movement of the position of the contour image. The methods described in the foregoing embodiments can be applied to the correction processing performed by the contour correction unit 210.
Next, the processing performed by the image processing apparatus 2C in the display system 100C will be described with reference to FIG. 25. FIG. 25 is a flowchart illustrating the processing performed by the image processing apparatus 2C.
First, the detection unit 250 detects the position of the observer 1 based on image information obtained by the imaging unit 230 capturing an image whose imaging range includes the observer 1 (step S10).
The stereoscopic image generation unit 220C generates a contour image of a stereoscopic image that can be viewed stereoscopically from the position of the observer 1 detected by the detection unit 250 in step S10 (step S20). So that the stereoscopic image can be viewed from this detected position, the stereoscopic image generation unit 220C generates a contour image for at least one piece of image information representing the images that are viewed as superimposed. For example, the stereoscopic image generation unit 220C generates at least the image information (contour image) D11P.
The contour correction unit 210 corrects the contour portion of the generated contour image based on the detected position of the observer 1 (step S30). For example, the generated contour image is at least the contour image D11P; the contour correction unit 210 corrects the contour portion of at least the contour image D11P to generate the image information D11.
The control unit 260 causes the display unit 11 and the display unit 12 of the display device 10 to display the corrected contour image (step S40). For example, the control unit 260 causes the display unit 11 of the display device 10 to display the image information D11 including the corrected contour image.
Through the processing described above, the image processing apparatus 2C can correct the contour of the image information displayed on the display device 10. As a result, the visibility of the stereoscopic image can be improved even when the observer 1 has moved from the predetermined position at which the stereoscopic image can be viewed.
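Expressed in Python, one pass through steps S10 to S40 might look as follows; the single-method interfaces (capture, detect_position, and so on) are hypothetical stand-ins for the units of FIG. 25, not names used by this disclosure:

    def process_frame(imaging_unit, detector, generator, corrector, display):
        """One pass of the flow of Fig. 25 under assumed unit interfaces."""
        frame = imaging_unit.capture()                    # image containing observer 1
        position = detector.detect_position(frame)        # step S10
        contour = generator.generate_contour(position)    # step S20: e.g. D11P
        corrected = corrector.correct(contour, position)  # step S30: yields D11
        display.show(corrected)                           # step S40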
Although a contour image showing only the contour has been described as the contour image D11P in the example and the modifications of the first embodiment, new image information obtained by combining the information of the original image from which the contour image D11P is generated with the image information of the contour image D11P may also be used as the contour image. In short, the image information newly obtained by such combination corresponds to the original image with its contour emphasized.
As described above, according to the display system 100C of the present embodiment, the visibility of a stereoscopic image can be improved even when the observer 1 has moved from the predetermined position at which the stereoscopic image can be viewed.
As described above, the display system 100 (100A, 100B, 100C) of the present embodiment detects the position of the moving observer 1 (user) and displays, on the display device 10 (10A, 10B), a stereoscopic image corresponding to the position of the observer 1 (user). In the stereoscopic image displayed on the display device 10 (10A, 10B), the edge portions of the images observed by the left eye and the right eye are corrected so that parallax is produced. As a result, the observer 1 can view the image stereoscopically from the position where the observer 1 currently is.
In the description of the present embodiment, a person present on the display surface side of the display device 10 (10A, 10B) of the display system 100 (100A, 100B, 100C) (an object present on the display surface side) is referred to as the observer 1. The observer 1 includes, for example, a person looking at the display surface of the display device 10 (a person viewing it), a person about to look at it, a person able to look at it, or simply a person present on the display surface side. A viewing position at which a stereoscopic image can be viewed stereoscopically is a position within a stereoscopically viewable viewing area determined by, for example, the distance and angle with respect to the display surface on which the stereoscopic image is displayed. In the description of the present embodiment, this "viewing area", or a "viewing position" within the "viewing area", is referred to as the "viewing position".
Note that the requirements of the first to fourth embodiments described above can be changed as appropriate within a range that does not affect the technical features of the invention, and some components may not be used.
For example, although the contours of the stereoscopic images illustrated in the above embodiments run in the vertical direction of the display device 10 (the Y axis in FIG. 1), the embodiments can also be applied to contours in the horizontal direction or in oblique directions of the display device 10. When generating a contour image for a horizontal or oblique contour of the display device 10, the correction amount for correcting the contour image may be adjusted in the direction orthogonal to the direction in which the target contour extends.
For example, although the position of the pixel at which the contour image is corrected has been described in the above embodiments as the pixel adjacent to the position of the contour before correction, it may instead be a position separated from the pre-correction contour position by a predetermined number of pixels. For example, when the contour is represented by a run of several pixels, the corrected position may be separated by a predetermined number of pixels corresponding to the width of the contour.
For example, although an example has been shown in which, among the display units arranged so that their displayed images overlap, the contour image is displayed on the display unit 11 located in the depth direction (the (-Z)-axis direction), it may instead be displayed on the display unit 12 arranged in front of the display surface 11S of the display unit 11. The contour image may be displayed on at least any one of the display units arranged so that their displayed images overlap, or on a plurality of such display units. Furthermore, displaying image information such that the combination of the images displayed on the plurality of display units results in the contour image is not excluded.
For example, instead of the display device 10, the display system 100 may supply image information to be displayed on a head-mounted display device, a so-called head mounted display (HMD). Alternatively, instead of the display device 10, the display system 100 may supply image information to be displayed on a stereoscopic image display device that uses lens-shutter glasses.
[Fifth Embodiment]
Hereinafter, a fifth embodiment of the present invention will be described with reference to the drawings.
FIG. 27 is a schematic diagram illustrating an example of the overall configuration of a display device 2100 according to the fifth embodiment of the present invention. In the description of each drawing below, an XYZ orthogonal coordinate system is set, and the positional relationship of each part is described with reference to this coordinate system. The direction in which the display device 2100 displays an image is taken as the positive direction of the Z axis, and the orthogonal directions on the plane perpendicular to the Z-axis direction are taken as the X-axis and Y-axis directions, respectively. Here, the X-axis direction is the horizontal direction of the display device 2100, and the Y-axis direction is the vertically upward direction of the display device 2100. The outline of the configuration of the display device 2100 is described below.
The display device 2100 of this embodiment includes, as a display unit 2010, a first display unit 2011 and a second display unit 2012. The first display unit 2011 includes a first display surface 2011S that displays an image at a depth position Z1. The second display unit 2012 includes a second display surface 2012S that displays an image at a depth position Z2. A user 1 (observer) views the first display surface 2011S and the second display surface 2012S from a viewpoint position VP predetermined at a depth position ZVP. Since the second display surface 2012S is a transmissive screen, when the user 1 views the second display surface 2012S from the viewpoint position VP, the image displayed on the first display surface 2011S and the image displayed on the second display surface 2012S appear to overlap.
Here, as an example, a case where the display target OBJ is the cube shown in FIG. 27 will be described. The first display surface 2011S displays an image P11 of the cube. The second display surface 2012S displays an image P12 of the cube. The size and position at which the image P12 is displayed are set in advance so that, when the user 1 views from the viewpoint position VP, the ridge lines of the cube shown by the image P12 appear to overlap the corresponding ridge lines of the cube shown by the image P11.
Note that the image P12 displayed on the second display surface 2012S may be an image showing the display target OBJ as it is, or an image showing the contour portion (edge portion) of the display target OBJ in an emphasized manner. For example, the image P12 displayed on the second display surface 2012S may be a contour image showing the contour portion of the display target OBJ. This contour image is generated, for example, by extracting the contour portion of the image P11 displayed on the first display surface 2011S with a differential filter or the like. The contour image may be an image in which the pixels representing the contour portion are one pixel wide, or an image in which they are several pixels wide. In the following, the case where the image P12 is a contour image, that is, where the second display surface 2012S displays the contour image P12, will be described.
When the user 1 views the image P11 and the contour image P12 superimposed from the viewpoint position VP, a difference arises between the left eye L and the right eye R of the user 1 in how the ridge lines of the cube overlap. That is, when the user 1 views the image P11 and the contour image P12, which are at different depth positions, superimposed from the viewpoint position VP, binocular parallax occurs. Owing to this binocular parallax, the user 1 perceives a stereoscopic image of the cube. The details of the mechanism by which the user 1 perceives a stereoscopic image will be described later. Here, the second display surface 2012S is described as a transmissive screen located on the near side (+Z side) of the first display surface 2011S in the depth direction as viewed from the viewpoint position VP, but this is not restrictive. For example, the first display surface 2011S may be a transmissive screen located on the near side (+Z side) of the second display surface 2012S as viewed from the viewpoint position VP.
Here, if the ridge lines of the cube displayed on the second display surface 2012S can be precisely aligned with the ridge lines of the cube displayed on the first display surface 2011S, the display device 2100 can make the user 1 perceive an accurate stereoscopic image. For example, if the positions of the cube's ridge lines displayed on the first display surface 2011S, as seen from the viewpoint position VP, are precisely aligned with the positions of the cube's ridge lines displayed on the second display surface 2012S, the display device 2100 can make the user 1 perceive an accurate stereoscopic image. The first display surface 2011S and the second display surface 2012S are, for example, liquid crystal displays or liquid crystal projector screens, and have two-dimensionally arranged pixels. The second display surface 2012S displays the ridge line of the cube at the pixel containing the position Pt2, which corresponds to the position Pt1 of the cube's ridge line displayed on the first display surface 2011S as seen from the viewpoint position VP. Each pixel has an area corresponding to the definition of the second display surface 2012S. If the position Pt2 coincides with the position Pt3 at the center of that pixel, the position Pt1 of the cube's ridge line on the first display surface 2011S as seen from the viewpoint position VP and the position of the cube's ridge line on the second display surface 2012S are precisely aligned. In general, however, the position Pt2 does not coincide with the pixel-center position Pt3. If, in that case, the second display surface 2012S displays the contour image P12 as it is without correction, the alignment accuracy decreases, and the display device 2100 may degrade the accuracy of the stereoscopic image perceived by the user 1 (for example, the sense of depth of the stereoscopic image). The accuracy of this image-to-image alignment will be described with reference to FIGS. 28 and 29.
FIG. 28 is a schematic diagram illustrating an example of the pixel configuration of the second display surface 2012S of the present embodiment. The second display surface 2012S has a plurality of pixels arranged two-dimensionally on the XY plane. Some of these pixels show a ridge line of the cube that is the display target OBJ. The pixels Px (pixels Px11 to Px33) showing the cube's ridge line will be described with reference to FIG. 29.
FIG. 29 is a schematic diagram illustrating an example of the pixels Px of the second display surface 2012S of the present embodiment. The pixels Px showing the cube's ridge line are, as an example, nine pixels: the central pixel Px22 and the surrounding pixels Px11 to Px33. The central pixel Px22 is the pixel containing the position Pt2 corresponding to the position Pt1 of the cube's ridge line displayed on the first display surface 2011S as seen from the viewpoint position VP. The position Pt3 described above is here the center position of the pixel Px22. The following describes the case where the position Pt2 is shifted from the position Pt3 by a distance ΔPt in the (+Y) direction.
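Under the geometry described here, Pt2 is where the line of sight from the viewpoint VP to Pt1 crosses the second display surface, and ΔPt is the offset of Pt2 from the center Pt3 of the pixel containing it. A sketch under those assumptions; taking the pixel grid to start at the display origin with a uniform pitch is a simplification of this illustration:

    import numpy as np

    def project_to_second_surface(pt1_xy, z1, vp_xyz, z2):
        """Find the position Pt2 on the second display surface (depth z2)
        where the line of sight from the viewpoint VP to a contour point Pt1
        on the first display surface (depth z1) crosses it (similar triangles)."""
        vp = np.asarray(vp_xyz, dtype=float)
        pt1 = np.array([pt1_xy[0], pt1_xy[1], z1], dtype=float)
        t = (z2 - vp[2]) / (pt1[2] - vp[2])
        return vp[:2] + t * (pt1[:2] - vp[:2])  # (x, y) of Pt2

    def subpixel_offset(pt2_xy, pixel_pitch):
        """Offset delta-Pt between Pt2 and the center Pt3 of the pixel Px22
        containing it."""
        ij = np.floor(np.asarray(pt2_xy, dtype=float) / pixel_pitch)  # pixel index
        pt3 = (ij + 0.5) * pixel_pitch                                # pixel center
        return np.asarray(pt2_xy) - pt3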
As described above, when the position Pt2 coincides with the pixel-center position Pt3, the position Pt1 of the cube's ridge line displayed on the first display surface 2011S as seen from the viewpoint position VP and the position of the cube's ridge line displayed on the second display surface 2012S are precisely aligned.
However, when the position Pt2 is shifted from the pixel-center position Pt3 by the distance ΔPt in the (+Y) direction, the ridge line is displayed shifted in the (-Y) direction from the position where it should originally be displayed.
The display device 2100 of this embodiment includes a contour correction unit 2013 that corrects the contour image P12.
The contour correction unit 2013 corrects the contour image P12 based on the direction of the deviation between the position Pt2 and the position Pt3. By means of the contour correction unit 2013, the display device 2100 improves the accuracy of the stereoscopic image perceived by the user 1. A specific configuration of the display device 2100 including the contour correction unit 2013 is described below.
FIG. 30 is a schematic diagram illustrating an example of a specific configuration of the display device 2100 of the present embodiment. As described above, the display device 2100 includes the first display unit 2011, the second display unit 2012, and the contour correction unit 2013. The first display unit 2011 includes the first display surface 2011S. By displaying the image P11 on the first display surface 2011S, the first display unit 2011 emits light R11 toward the user 1. This light R11 passes through the second display surface 2012S, which is a transmissive screen, and reaches the user 1. The first display surface 2011S converts image information D11 supplied from an image supply device 2002 into the image P11 and displays it. The image information D11 is, for example, image information representing the cube that is the display target OBJ. The second display unit 2012 includes the second display surface 2012S. By displaying the contour image P12 on the second display surface 2012S, the second display unit 2012 emits light R12 toward the user 1. The second display surface 2012S converts corrected image information D12C, obtained by the contour correction unit 2013 correcting image information D12 supplied from the image supply device 2002, into the contour image P12 and displays it. The image information D12 is, for example, image information of a contour image representing the contour portion of the cube that is the display target OBJ. The corrected image information D12C is image information of the corrected contour image obtained by the contour correction unit 2013 correcting the image information D12.
In this way, the display device 2100 displays the image P11 and the contour image P12, which shows the contour portion of the display target OBJ, superimposed on each other. This can give a stereoscopic effect to the display target OBJ observed by the user 1. That is, when the user 1 views the image P11 and the contour image P12 superimposed, the user 1 perceives a stereoscopic image in which the display target OBJ appears to jump out in the Z-axis direction. The mechanism by which the images displayed by the display device 2100 give the user 1 this stereoscopic effect will be described with reference to FIGS. 31 to 37D.
FIG. 31 is a schematic diagram illustrating an example of the image P11 of the display target OBJ of the present embodiment. Here, the case where the display target OBJ is a quadrangular pattern will be described as an example. This quadrangular pattern is a square pattern in which four sides of equal length meet at right angles at each vertex. The pattern has four sides serving as contour lines that separate its exterior from its interior. An observer looking at this square pattern perceives a contour line as an edge portion E of the pattern when the difference in brightness between the exterior and the interior of the pattern is large. That is, an edge portion E is a portion of the display target OBJ where the difference in brightness from the surroundings is relatively large compared with that of other portions. In a square pattern, each side can be an edge portion E; here, the two sides of the pattern parallel to the Y-axis direction are referred to as the edge portion E1 and the edge portion E2, respectively.
Next, an example of the contour image P12 corresponding to this square pattern will be described with reference to FIG. 32.
FIG. 32 is a schematic diagram illustrating an example of the contour image P12 of the present embodiment. As described above, when the display target OBJ is a square pattern, the contour image P12 is an image including an edge image PE1 showing the edge portion E1 of the pattern and an edge image PE2 showing the edge portion E2. That is, when the display target OBJ is a square pattern as described above, the second display surface 2012S displays the contour image P12 including the edge images PE1 and PE2 corresponding to the edge portions E1 and E2, respectively.
Next, the positional relationship among the square pattern as the display target OBJ, the contour image P12 including the edge images PE1 and PE2, and the observer's viewpoint position VP will be described with reference to FIG. 33.
FIG. 33 is a schematic diagram illustrating an example of the positional relationship among the image P11, the contour image P12, and the viewpoint position VP in the present embodiment. The observer views, from the viewpoint position VP at the position ZVP, the contour image P12 at the position Z2 and the image P11 at the position Z1 superimposed. As described above, the contour image P12 is an image including the edge images PE1 and PE2 corresponding to the edge portions E1 and E2 of the square pattern as the display target OBJ. Here, the X-direction position of the edge portion E1 of the square pattern is the position X2, and the X-direction position of the edge portion E2 is the position X5.
As described above, the second display surface 2012S displays the contour image P12 so that the edge portion E1 at the position X2 and the edge image PE1 of the contour image P12 appear to overlap at the viewpoint position VP. Similarly, the second display surface 2012S displays the contour image P12 so that the edge portion E2 at the position X5 and the edge image PE2 of the contour image P12 appear to overlap at the viewpoint position VP. When the observer views the contour image P12 displayed in this way superimposed on the image P11 showing the display target OBJ (for example, the square pattern), brightness steps arise that are too small to be resolved on the observer's retinal image. In such a case, the observer perceives a virtual contour (edge) between the brightness (for example, luminance) steps, and perceives the contour image P12 and the image P11 as a single image. At this time, binocular parallax arises between the optical image IML seen by the left eye L and the optical image IMR seen by the right eye R, so this virtual contour is also slightly displaced between the eyes and is perceived as binocular parallax, and the apparent depth position of the image P11 changes. In the following, the optical image IML seen by the left eye L and the optical image IMR seen by the right eye R are described in this order, together with the mechanism by which the apparent depth position of the image P11 changes.
FIG. 34 is a schematic diagram illustrating an example of the optical images IM seen by the observer's eyes in the present embodiment. Of these, FIG. 34(L) is a schematic diagram showing an example of the optical image IML seen by the observer's left eye L. As shown in the figure, when the image P11 and the contour image P12 are viewed from the position of the left eye L at the viewpoint position VP at the position ZVP, the edge image PE1 and the edge portion E1 appear to overlap in the range from the position X2 to the position X3. Since, as described above, the edge portion E1 of the square pattern is at the position X2, to the observer's left eye L the edge image PE1 and the edge portion E1 appear to overlap at a position inside the square pattern (in the +X direction) relative to the edge portion E1. Also, when the image P11 and the contour image P12 are viewed from the position of the left eye L at the viewpoint position VP, the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X5 to the position X6. Since, as described above, the edge portion E2 of the square pattern is at the position X5, to the observer's left eye L the edge image PE2 and the edge portion E2 appear to overlap at a position outside the square pattern (in the +X direction) relative to the edge portion E2.
The brightness of the optical image at the position of the observer's left eye L will now be described with reference to FIG. 35.
FIG. 35 is a graph showing an example of the brightness of the optical images IM at the viewpoint position VP of the present embodiment. Of these, FIG. 35(L) is a graph showing an example of the brightness of the optical image IML at the position of the left eye L at the viewpoint position VP. At the position of the left eye L at the viewpoint position VP, an optical image IML arises whose brightness is the combination of the brightness of the image P11 and the brightness of the image (contour image) P12L seen from the position of the left eye L. Specific examples of the brightness, seen from the viewpoint position VP, of the square pattern as the image P11 and of the contour image P12L are as follows. The brightness of the interior of the square pattern seen from the viewpoint position VP is the brightness BR2, and the brightness of the exterior of the square pattern seen from the viewpoint position VP is brightness 0 (zero). As described above, at the position Z1 the edge portion E1 of the square pattern is at the position X2 and the edge portion E2 is at the position X5. Accordingly, the brightness of the square pattern seen from the viewpoint position VP is the brightness BR2 from the position X2 to the position X5, and brightness 0 (zero) at positions in the (-X) direction from the position X2 and in the (+X) direction from the position X5.
The brightness, seen from the viewpoint position VP, of the edge images PE1 and PE2 of the contour image P12 is the brightness BR1. As described above, the edge image PE1 of the contour image P12L seen from the position of the left eye L is displayed so as to overlap the edge portion E1 of the square pattern in the range from the position X2 to the position X3, and the edge image PE2 of the contour image P12L is displayed so as to overlap the edge portion E2 of the square pattern in the range from the position X5 to the position X6. Accordingly, the brightness of the contour image P12L seen from the viewpoint position VP is the brightness BR1 from the position X2 to the position X3 and from the position X5 to the position X6, and brightness 0 (zero) at the other positions in the X direction.
Therefore, when the image P11 and the contour image P12L are viewed superimposed at the viewpoint position VP, the brightness of the resulting optical image IML is as follows: the brightness BR3 from the position X2 to the position X3, the brightness BR2 from the position X3 to the position X5, the brightness BR1 from the position X5 to the position X6, and brightness 0 (zero) at positions in the (-X) direction from the position X2 and in the (+X) direction from the position X6.
Next, the mechanism by which an observer who sees the optical image IML perceives the contour portion of the image P11 will be described.
FIG. 36 is a graph showing an example of the contour portion of the image P11 perceived by the observer in the present embodiment based on the optical image IML. Of these, FIG. 36(L) is a graph showing an example of the contour portion of the image P11 perceived by the observer based on the optical image IML at the position of the observer's left eye L. Here, the contour portion of the image P11 is the part of the optical image representing the image P11 where the change in brightness is large compared with the change in brightness of the surrounding parts. The brightness distribution of the image recognized by the observer through the optical image IML formed on the retina of the left eye L at the viewpoint position VP is like the waveform WL in FIG. 36(L). The observer then perceives, as the contour portion of the observed image P11, the position on the X axis where the change in brightness of the recognized optical image IML is greatest (that is, where the slope of the waveform WL is greatest). Specifically, an observer observing the optical image IML perceives the position XEL shown in FIG. 36(L) (that is, the position at the distance LEL from the origin O of the X axis) as the contour portion of the square pattern.
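The following sketch reproduces this reasoning numerically for the left eye: it sums the brightness profiles of FIG. 35(L), stands in for retinal image formation with a simple Gaussian blur (an assumption; the disclosure does not specify the blur), and reports the position of maximum brightness change as the perceived contour position. The coordinate values and brightness levels below are illustrative only.

    import numpy as np

    def perceived_edge_position(x, pattern, contour, blur_sigma=3.0):
        """Combine the brightness of the square pattern (image P11) with the
        brightness of the contour image seen by one eye (e.g. P12L), blur the
        sum as a stand-in for the retinal image, and return the position where
        the brightness change is largest, i.e. the edge the observer perceives."""
        profile = pattern + contour  # e.g. BR2 + BR1 = BR3 where they overlap
        kx = np.arange(-3 * blur_sigma, 3 * blur_sigma + 1)
        kernel = np.exp(-0.5 * (kx / blur_sigma) ** 2)
        kernel /= kernel.sum()
        retinal = np.convolve(profile, kernel, mode="same")
        grad = np.abs(np.gradient(retinal))
        return x[int(np.argmax(grad))]  # X_EL for the left eye, X_ER for the right

    # Illustrative profiles with X2=2, X3=3, X5=5, X6=6 and arbitrary BR levels:
    x = np.arange(0.0, 8.0, 0.01)
    BR1, BR2 = 0.3, 0.6
    pattern = np.where((x >= 2.0) & (x < 5.0), BR2, 0.0)            # P11 interior
    contour_L = np.where(((x >= 2.0) & (x < 3.0))
                         | ((x >= 5.0) & (x < 6.0)), BR1, 0.0)      # PE1 and PE2
    x_el = perceived_edge_position(x, pattern, contour_L)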
So far, the optical image IML seen by the observer's left eye L and the position of the contour portion given by the optical image IML have been described. Next, the optical image IMR seen by the observer's right eye R and the position of the contour portion given by the optical image IMR will be described.
As shown in FIG. 34(R), when the image P11 and the contour image P12 are viewed from the position of the right eye R at the viewpoint position VP at the position ZVP, the edge image PE1 and the edge portion E1 appear to overlap in the range from the position X1 to the position X2. This differs from the left eye L, for which the edge image PE1 and the edge portion E1 appear to overlap in the range from the position X2 to the position X3. Also, since as described above the edge portion E1 of the square pattern is at the position X2, to the observer's right eye R the edge image PE1 and the edge portion E1 appear to overlap at a position outside the square pattern (in the -X direction) relative to the edge portion E1. This differs from the observer's left eye L, to which the edge image PE1 and the edge portion E1 appear to overlap at a position inside the square pattern (in the +X direction) relative to the edge portion E1.
Also, when the image P11 and the contour image P12 are viewed from the position of the right eye R at the viewpoint position VP at the position ZVP, the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X4 to the position X5. This differs from the left eye L, for which the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X5 to the position X6. Also, since as described above the edge portion E2 of the square pattern is at the position X5, to the observer's right eye R the edge image PE2 and the edge portion E2 appear to overlap at a position inside the square pattern (in the -X direction) relative to the edge portion E2. This differs from the observer's left eye L, to which the edge image PE2 and the edge portion E2 appear to overlap at a position outside the square pattern (in the +X direction) relative to the edge portion E2.
The brightness of the optical image at the position of the observer's right eye R will now be described with reference to FIG. 35(R). FIG. 35(R) is a graph showing an example of the brightness of the optical image IMR at the position of the right eye R at the viewpoint position VP. At the position of the right eye R at the viewpoint position VP, an optical image IMR arises whose brightness is the combination of the brightness of the image P11 and the brightness of the image (contour image) P12R seen from the position of the right eye R. Of these, the brightness of the square pattern as the image P11 seen from the viewpoint position VP is the same as at the position of the left eye L. As a specific example of the brightness of the contour image P12R seen from the viewpoint position VP, the brightness of its edge images PE1 and PE2 is the brightness BR1. As described above, the edge image PE1 of the contour image P12R seen from the position of the right eye R is displayed so as to overlap the edge portion E1 of the square pattern in the range from the position X1 to the position X2; this differs from the edge image PE1 of the contour image P12L seen from the position of the left eye L, which overlaps the edge portion E1 in the range from the position X2 to the position X3. The edge image PE2 of the contour image P12R is displayed so as to overlap the edge portion E2 of the square pattern in the range from the position X4 to the position X5; this differs from the edge image PE2 of the contour image P12L, which overlaps the edge portion E2 in the range from the position X5 to the position X6.
Accordingly, the brightness of the contour image P12R seen from the viewpoint position VP is the brightness BR1 from the position X1 to the position X2 and from the position X4 to the position X5, and brightness 0 (zero) at the other positions in the X direction. This differs from the brightness of the contour image P12L seen from the viewpoint position VP, which is the brightness BR1 from the position X2 to the position X3 and from the position X5 to the position X6.
Therefore, when the image P11 and the contour image P12R are viewed superimposed at the viewpoint position VP, the brightness of the resulting optical image IMR is as follows: the brightness BR1 from the position X1 to the position X2, the brightness BR2 from the position X2 to the position X4, the brightness BR3 from the position X4 to the position X5, and brightness 0 (zero) at positions in the (-X) direction from the position X1 and in the (+X) direction from the position X5. This differs from the brightness of the optical image IML, which is the brightness BR3 from the position X2 to the position X3, the brightness BR2 from the position X3 to the position X5, and the brightness BR1 from the position X5 to the position X6.
Next, the mechanism by which an observer who sees the optical image IMR perceives the contour portion of the image P11 will be described. FIG. 36(R) is a graph showing an example of the contour portion of the image P11 perceived by the observer based on the optical image IMR at the position of the observer's right eye R. The brightness distribution of the image recognized by the observer through the optical image IMR formed on the retina of the right eye R at the viewpoint position VP is like the waveform WR in FIG. 36(R). The observer then perceives, as the contour portion of the observed image P11, the position on the X axis where the change in brightness of the recognized optical image IMR is greatest (that is, where the slope of the waveform WR is greatest). Specifically, an observer observing the optical image IMR perceives the position XER shown in FIG. 36(R) (that is, the position at the distance LER from the origin O of the X axis) as the contour portion of the square pattern. This differs from an observer observing the optical image IML, who perceives the position at the distance LEL from the origin O of the X axis as the contour portion of the square pattern.
As a result, the observer perceives the position XEL of the contour portion of the square observed by the left eye L and the position XER of the contour portion of the square observed by the right eye R as binocular parallax, and perceives the square pattern as a stereoscopic image (three-dimensional image) based on the binocular parallax of the contour portion.
So far, the mechanism by which the display device 2100 causes the user 1 at the viewpoint position VP to perceive a stereoscopic image of the display target OBJ has been described. The configuration by which the contour correction unit 2013 of the display device 2100 corrects the contour image is described below.
FIGS. 37A to 37D are schematic diagrams illustrating examples of the contour image corrected by the contour correction unit 2013 of the present embodiment. As described with reference to FIG. 29, the contour correction unit 2013 corrects the contour image P12 based on the position Pt2 on the second display surface 2012S, which corresponds to the position Pt1 of the cube's ridge line displayed on the first display surface 2011S as seen from the viewpoint position VP, and the position Pt3 at the center of the pixel Px22 of the contour image P12. That is, the contour correction unit 2013 corrects the contour image P12 based on the position Pt3 of the pixel Px22 (contour pixel) that displays the contour image P12 among the pixels of the second display unit 2012, and the position Pt2 (contour position) on the second display unit 2012 determined from the position Pt1 on the first display unit 2011 corresponding to that pixel Px22 (contour pixel) and the predetermined viewpoint position VP.
Here, the position Pt2 is the position on the second display surface 2012S that corresponds to the contour portion displayed on the first display surface 2011S when the first display surface 2011S and the second display surface 2012S are viewed superimposed from the viewpoint position VP. As described above, if the contour image displayed on the second display surface 2012S were displayed centered on the position Pt2, the image P11 and the contour image P12 would be precisely aligned. Therefore, as shown in the figures, when the position Pt2 deviates from the pixel-center position Pt3, the contour correction unit 2013 corrects the pixel values of the pixels Px11 to Px33. For example, the contour correction unit 2013 corrects the pixel values of the pixels Px11 to Px33 so that the position Pt2 becomes the centroid when the pixel values of the pixels surrounding the pixel Px22 (the pixels Px11 to Px33) are weighted and averaged based on distance on the XY plane.
 Specifically, as shown in FIG. 37A, when the position Pt2 is shifted in the (+Y) direction from the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px12 adjacent to the pixel Px22 in the (+Y) direction. Likewise, as shown in FIG. 37B, when the position Pt2 is shifted in the (-Y) direction from the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px32 adjacent to the pixel Px22 in the (-Y) direction.
 Similarly, as shown in FIG. 37C, when the position Pt2 is shifted in the (-X) direction from the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px21 adjacent to the pixel Px22 in the (-X) direction. Here, the pixel Px21 is a pixel that already displays part of the contour indicated by the contour image P12 before the correction. The contour correction unit 2013 therefore sets the pixel value of the pixel Px21 to its pre-correction contour value plus the correction value computed for the pixel Px22. In the same way, as shown in FIG. 37D, when the position Pt2 is shifted in the (+X) direction from the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px23 adjacent to the pixel Px22 in the (+X) direction.
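 A minimal sketch of the four-way direction choice in FIGS. 37A to 37D, stated in the figures' XY convention (+Y up, so Px12 sits one row above Px22). The dominance test used to pick one axis when both offsets are nonzero is an assumption, since the text treats the four directions case by case:

```python
def neighbor_to_correct(dx, dy):
    """Offset (drow, dcol) of the neighbor of Px22 to correct, given a
    nonzero XY shift (dx, dy) of Pt2 from the pixel center Pt3:
    (+Y) -> Px12, (-Y) -> Px32, (-X) -> Px21, (+X) -> Px23."""
    if abs(dy) >= abs(dx):            # vertical shift dominates (assumption)
        return (-1, 0) if dy > 0 else (1, 0)
    return (0, -1) if dx < 0 else (0, 1)

# In the FIG. 37C case, Px21 already carries contour value before the
# correction, so the update there is additive: the corrected value computed
# for Px22 is added to Px21's pre-correction contour value.
```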
 In this way, the contour correction unit 2013 corrects the pixel values of pixels surrounding (for example, adjacent to) the pixel Px22 (contour pixel) that displays the contour image P12 among the pixels of the second display unit 2012. The display device 2100 can thereby improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
 The contour correction unit 2013 has been described here as correcting the pixel values of pixels surrounding (for example, adjacent to) the pixel Px22 (contour pixel) that displays the contour image P12, but the invention is not limited to this. It suffices that the correction makes the position Pt2 perceived as overlapping the position Pt1 by the user 1 viewing the first display surface 2011S and the second display surface 2012S from the viewpoint position VP; the contour correction unit 2013 may therefore be configured to correct the pixel values of pixels in the vicinity of the pixel Px22. Neighboring pixels here need not be directly adjacent; for example, a neighboring pixel may be one adjacent to a pixel that is itself adjacent to the pixel Px22.
 The contour correction unit 2013 has also been described as correcting the pixel value of a pixel lying in the direction in which the position Pt2 is shifted from the center position Pt3 of the pixel Px22, but the invention is not limited to this. For example, the contour correction unit 2013 may correct the pixel value of a pixel lying in the direction opposite to the direction of the shift. Specifically, in FIG. 37A, the contour correction unit 2013 may correct the pixel value of the pixel Px32, which lies opposite to the direction in which the position Pt2 is shifted from the center position Pt3 of the pixel Px22.
 Further, the contour correction unit 2013 may, for example, correct the pixel value of a pixel lying diagonally to the direction of the shift. Specifically, in FIG. 37A, the contour correction unit 2013 may correct the pixel value of the pixel Px11, which lies diagonally to the direction in which the position Pt2 is shifted from the center position Pt3 of the pixel Px22; in this case it may likewise correct any of the pixels Px13, Px31, and Px33 lying diagonally to that direction. The contour correction unit 2013 may also combine any or all of the pixels in the direction of the shift, in the opposite direction, and in the diagonal directions, and correct the pixel values of these plural pixels.
 Furthermore, although the contour correction unit 2013 has been described here as correcting the pixel value of a single pixel, the invention is not limited to this. The contour correction unit 2013 may be configured to correct the pixel values of a plurality of pixels in the vicinity of the pixel Px22 so that the position Pt2 is perceived as overlapping the position Pt1 by the user 1 viewing the first display surface 2011S and the second display surface 2012S from the viewpoint position VP.
 Next, the operation of the display device 2100 will be described with reference to FIG. 38.
 FIG. 38 is a flowchart showing an example of the operation of the display device 2100 of this embodiment.
 The first display unit 2011 acquires the image information D11 from the image supply device 2002 and displays the image P11 based on the image information D11 on the first display surface 2011S (step S2010).
 Next, the contour correction unit 2013 acquires the image information D12 from the image supply device 2002 (step S2020).
 Next, the contour correction unit 2013 generates corrected image information D12C by correcting the acquired image information D12 based on the positions Pt2 and Pt3. Specifically, the contour correction unit 2013 compares the contour position Pt2 with the pixel center position Pt3 and determines the direction of the shift of the contour position relative to the pixel center (step S2030). When it determines that the position Pt2 is shifted in the (+Y) direction relative to the pixel center position Pt3 (step S2030-UP), it corrects the pixel value of the pixel Px12. When it determines that the position Pt2 is shifted in the (-Y) direction relative to the position Pt3 (step S2030-DOWN), it corrects the pixel value of the pixel Px32. Similarly, when it determines that the position Pt2 is shifted in the (-X) direction relative to the position Pt3 (step S2030-LEFT) or in the (+X) direction (step S2030-RIGHT), it corrects the pixel value of the pixel Px21 or Px23, respectively.
 Next, the second display unit 2012 acquires the corrected image information D12C from the contour correction unit 2013 and displays the contour image P12 based on the corrected image information D12C on the second display surface 2012S (step S2040). The display device 2100 repeats steps S2010 to S2040, displaying the image P11 and the contour image P12 while correcting the contour image P12.
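 The loop of FIG. 38 can be summarized by the sketch below; the object and method names (image_supply, corrector, and so on) are illustrative stand-ins for the interfaces implied by the text, not identifiers from the patent:

```python
def run_display_loop(display1, display2, corrector, image_supply, frames):
    """One pass per frame through steps S2010-S2040 of FIG. 38."""
    for _ in range(frames):
        d11 = image_supply.get_d11()      # image information D11
        display1.show(d11)                # S2010: show P11 on 2011S
        d12 = image_supply.get_d12()      # S2020: image information D12
        d12c = corrector.correct(d12)     # S2030: shift-direction correction
        display2.show(d12c)               # S2040: show corrected P12 on 2012S
```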
 As described above, the display device 2100 of this embodiment includes the first display unit 2011, the second display unit 2012, and the contour correction unit 2013. The first display unit 2011 displays the image of the display target at a first depth position. The second display unit 2012 has a plurality of two-dimensionally arranged pixels and displays, at a second depth position different from the first depth position, a contour image showing the contour portion of the display target. The contour correction unit 2013 corrects the contour image based on the position of the contour pixel that displays the contour image among the pixels of the second display unit 2012, and on the contour position on the second display unit 2012 determined from the position of the corresponding contour portion on the first display unit 2011 and the predetermined viewpoint position VP.
 As described above, the first display surface 2011S and the second display surface 2012S are, for example, liquid crystal displays or liquid crystal projector screens and have two-dimensionally arranged pixels.
 Because each pixel has a finite area, the position Pt2, which corresponds to the position Pt1 of the cube's ridge line displayed on the first display surface 2011S as seen from the viewpoint position VP, may deviate from the pixel center position Pt3. When such a deviation occurs, the accuracy of alignment between the contour image P12 displayed on the second display surface 2012S and the image P11 displayed on the first display surface 2011S is degraded, and the user 1 can no longer be made to perceive an accurate stereoscopic image. The contour correction unit 2013 of this embodiment therefore corrects the position of the contour (for example, the ridge line of the cube) indicated by the contour image P12 based on the deviation between the position Pt2 and the pixel center position Pt3. The display device 2100 thereby reduces the degradation in alignment accuracy between the contour image P12 on the second display surface 2012S and the image P11 on the first display surface 2011S, and can make the user 1 perceive a highly accurate stereoscopic image.
 The contour correction unit 2013 may also correct the pixel values of pixels in the vicinity of the pixel Px22 (contour pixel) by a correction amount based on the distance between the center position Pt3 of the pixel Px22 and the position Pt2 (contour position). Specifically, when the positions Pt2 and Pt3 are separated by a distance ΔPt, the contour correction unit 2013 sets the pixel value of the pixel to be corrected by a correction amount corresponding to the distance ΔPt; for example, the farther the position Pt2 is from the pixel center position Pt3, the larger the correction amount it applies. This improves the accuracy of the correction, so the display device 2100 can make the user 1 perceive an even more accurate stereoscopic image.
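 For example, a correction amount linear in the offset distance satisfies the stated behavior (zero when Pt2 coincides with the pixel center, growing as Pt2 moves away). The linear form and the normalization by the pixel pitch are assumptions; the text only requires that the amount increase with ΔPt:

```python
import math

def correction_amount(pt2, pt3, base_value, pixel_pitch=1.0):
    """Correction applied to a neighboring pixel, scaled by the distance
    dPt between the contour position Pt2 and the pixel center Pt3."""
    d_pt = math.hypot(pt2[0] - pt3[0], pt2[1] - pt3[1])
    return base_value * min(d_pt / pixel_pitch, 1.0)  # capped at one pitch
```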
 [Modification]
 The contour correction unit 2013 may also be configured to correct the contour image P12 based on a first contour position Pt2L determined from the position of the left eye L of the user 1, a second contour position Pt2R determined from the position of the right eye R of the user 1, and the position Pt3 of the pixel Px22 (contour pixel).
 FIG. 39 is a schematic diagram showing an example of the positional relationship between the user 1 and the display device 2100 according to a modification of this embodiment. As described above, the first display surface 2011S displays the image P11 of a cube, and the second display surface 2012S displays its contour image P12. The user 1 views the first display surface 2011S at the depth position Z1 and the second display surface 2012S at the depth position Z2, one over the other, from the viewpoint position VP at the depth position ZVP. Seen from this viewpoint position VP, the position Pt1 of the cube's ridge line on the first display surface 2011S and the position Pt2 on the second display surface 2012S appear to overlap. However, the position on the second display surface 2012S that appears to overlap the ridge-line position Pt1 differs between the left eye L and the right eye R of the user 1. Specifically, from the position of the left eye L of the user 1, the ridge-line position Pt1 on the first display surface 2011S appears to overlap the position Pt2L on the second display surface 2012S, whereas from the position of the right eye R it appears to overlap the position Pt2R.
 Here, the contour correction unit 2013 sets the midpoint on the XY plane between the positions Pt2L and Pt2R on the second display surface 2012S as the position Pt2 (contour position) serving as the reference for the pixel-value correction described above, and corrects the contour image P12 accordingly.
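 The geometry of FIG. 39 reduces to similar triangles: the ray from each eye through Pt1 on the plane at depth Z1 is intersected with the plane at depth Z2, and the XY midpoint of the two intersections gives the reference Pt2. A minimal sketch, assuming eye positions are given as (x, y, z) with z the depth coordinate; the function names are illustrative:

```python
def project_to_plane(eye, pt1, z1, z2):
    """Intersect the ray from an eye through Pt1 (on the plane at depth
    z1) with the second display plane at depth z2."""
    ex, ey, ez = eye
    s = (z2 - ez) / (z1 - ez)             # similar-triangle ratio
    return (ex + s * (pt1[0] - ex), ey + s * (pt1[1] - ey))

def reference_contour_position(eye_l, eye_r, pt1, z1, z2):
    """Pt2 as the XY midpoint of Pt2L and Pt2R, per the modification."""
    xl, yl = project_to_plane(eye_l, pt1, z1, z2)
    xr, yr = project_to_plane(eye_r, pt1, z1, z2)
    return ((xl + xr) / 2.0, (yl + yr) / 2.0)
```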
 Because the contour correction unit 2013 can thus correct the contour image P12 based on the positions of the left eye L and the right eye R of the user 1, the correction accuracy is improved, and the display device 2100 can make the user 1 perceive an even more accurate stereoscopic image.
 [Sixth Embodiment]
 Next, a display device 2100a according to a sixth embodiment of the present invention will be described with reference to FIG. 40. The display device 2100a of this embodiment differs from the embodiments described above in that it includes a detection unit 2014 that detects the viewpoint position VP. Components identical to those of the embodiments described above are given the same reference signs, and their description is omitted.
 FIG. 40 is a schematic diagram showing an example of the configuration of the display device 2100a according to the sixth embodiment of the present invention. The display device 2100a includes a contour correction unit 2013a and a detection unit 2014.
 The detection unit 2014 includes a distance-measuring sensor; it detects the position of the user 1, takes the detected position as the viewpoint position VP, and outputs information indicating this viewpoint position VP to the contour correction unit 2013a.
 The contour correction unit 2013a acquires the information indicating the viewpoint position VP detected by the detection unit 2014 and, based on this information, calculates the position Pt2 (contour position) on the second display unit 2012. It then corrects the contour image P12 based on the calculated position Pt2 and the position Pt3 of the pixel Px22 (contour pixel) that displays the contour image P12. That is, the contour correction unit 2013a corrects the contour image P12 based on the position Pt3 of the contour pixel and on the position Pt2 (contour position) determined from the position Pt1 of the corresponding contour portion on the first display unit 2011 and the detected viewpoint position VP.
 Since the display device 2100a can thus detect the position of the user 1 as the viewpoint position VP, it can correct the contour image P12 to match the position to which the user 1 has moved, for example. Even when the position of the user 1 changes, the display device 2100a can therefore improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
 The detection unit 2014 may also detect the center position of the face of the user 1 as the viewpoint position VP.
 For example, the detection unit 2014 includes a video camera (not shown) and an image analysis unit; the image analysis unit analyzes an image captured by the video camera to extract the face of the user 1 and detects the center position of that face as the viewpoint position VP. Here, the center position of the face of the user 1 includes the centroid position based on the outline of the face and the midpoint between the positions of the left eye L and the right eye R. The display device 2100a can thereby set the viewpoint position VP to match the orientation of the face of the user 1 even when that orientation changes. Even when not only the position of the user 1 but also the orientation of the face changes, the display device 2100a can therefore improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
 The contour correction unit 2013a may also correct the contour image P12 by a correction amount based on the distance to the user 1 detected by the detection unit 2014. When the user 1 is close to the display device 2100a, fine details of the image are easily perceived, so any misalignment between the image P11 on the first display surface 2011S and the contour image P12 on the second display surface 2012S is easily noticed by the user 1. Conversely, when the user 1 is far from the display device 2100a, fine details are harder to perceive, and such misalignment is less noticeable.
 Accordingly, when the distance to the user 1 detected by the detection unit 2014 is relatively large (for example, larger than a predetermined threshold), the contour correction unit 2013a corrects the contour image P12 with a smaller correction amount; when that distance is larger still (for example, larger than another predetermined threshold), it may omit the correction altogether. Conversely, when the detected distance is relatively small (for example, smaller than a predetermined threshold), the contour correction unit 2013a corrects the contour image P12 with a larger correction amount. By correcting the contour image P12 according to the distance to the user 1 in this way, the contour correction unit 2013a makes misalignment between the image P11 and the contour image P12 hard for the user 1 to perceive even when the position of the user 1 changes. The display device 2100a can therefore maintain accurate alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S even as the position of the user 1 changes.
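 One concrete way to realize this distance dependence is a gain that tapers off between two thresholds. The threshold values, units, and linear taper are assumptions; the text specifies only a smaller (or no) correction at large distances and a larger one up close:

```python
def correction_gain(viewer_distance, near=1.0, far=3.0):
    """Attenuate the contour correction with viewing distance
    (illustrative units): full strength up close, none beyond `far`."""
    if viewer_distance <= near:
        return 1.0
    if viewer_distance >= far:
        return 0.0                         # misalignment imperceptible
    return (far - viewer_distance) / (far - near)
```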
 [Seventh Embodiment]
 Next, a display device 2100b according to a seventh embodiment of the present invention will be described with reference to FIG. 41. The display device 2100b of this embodiment differs from the embodiments described above in that it includes a first display unit 2011b, a second display unit 2012b, and a third display unit 2015. Components identical to those of the embodiments described above are given the same reference signs, and their description is omitted.
 FIG. 41 is a schematic diagram showing an example of the configuration of the display device 2100b according to the seventh embodiment of the present invention. The display device 2100b includes the first display unit 2011b, the second display unit 2012b, and the third display unit 2015.
 The first display unit 2011b includes a first display surface 2011Sb that displays an image at the depth position Z1, and the second display unit 2012b includes a second display surface 2012Sb that displays an image at the depth position Z2.
 In each of the embodiments described above, the first display unit 2011 and the second display unit 2012 may be either monochrome (black-and-white) display units or color display units. In this embodiment, the first display unit 2011b and the second display unit 2012b are monochrome display units: the first display unit 2011b displays a monochrome image P11b on the first display surface 2011Sb, and the second display unit 2012b displays a monochrome image P12b on the second display surface 2012Sb. Here, a monochrome image is an image composed solely of brightness (for example, luminance) pixel values, with no chromaticity or saturation pixel values; it includes binary black-and-white images and grayscale images. The image P11b contains the image of the display target OBJ; that is, the first display unit 2011b displays the image P11b including the image of the display target OBJ on the first display surface 2011Sb. The image P12b contains an edge image PE', a monochrome image showing the contour portion of the display target OBJ; that is, the second display unit 2012b displays the edge image PE' showing the contour of the display target OBJ on the second display surface 2012Sb. By displaying these images, the first display unit 2011b and the second display unit 2012b produce a stereoscopic image of the display target OBJ, which in this case is a monochrome stereoscopic image.
 The third display unit 2015 is a color display unit that displays an image P15, a color image corresponding to the images P11b and P12b. The image P15 is displayed at the depth position Z3 so that it overlaps the images P11b and P12b when the user 1 views the display units from the viewpoint position VP. By displaying the image P15, the third display unit 2015 adds color to the monochrome images P11b and P12b; that is, when the user 1 views these images superimposed from the viewpoint position VP, the monochrome images P11b and P12b appear to be color images.
 As described above, a stereoscopic image arises whenever binocular parallax occurs between the images P11b and P12b, so a stereoscopic image is produced even when both are monochrome. When pixel-value accuracy is easier to achieve in a monochrome image than in a color image, displaying monochrome images instead of color images improves the accuracy of the binocular parallax; by displaying the monochrome images P11b and P12b, the display device 2100b can thus improve the accuracy of the stereoscopic image perceived by the user 1. On the other hand, a color image can convey more information than a monochrome image. By superimposing the color image P15 on the stereoscopic image produced by the images P11b and P12b, the display device 2100b can therefore increase the information content of the stereoscopic image while improving its accuracy, and make the user 1 perceive the stereoscopic image accordingly.
 Although embodiments of the present invention have been described above in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and modifications may be made as appropriate without departing from the spirit of the invention. The configurations described in the embodiments above may also be combined.
 Each unit of the display system 100 in the embodiments above, and each unit of each display device in the embodiments above (the display devices 2100, 2100a, and 2100b, hereinafter collectively referred to as the display device), may be realized by dedicated hardware, or by a memory and a microprocessor.
 Each unit of the display system 100 and each unit of the display device may be constituted by a memory and a CPU (central processing unit), and their functions may be realized by loading a program that implements the functions of those units into the memory and executing it.
 Further, the processing of each of the units described above may be performed by recording a program for realizing the functions of the units of the display system 100 and of the display device on a computer-readable recording medium, and having a computer system read and execute the program recorded on the medium. The "computer system" here includes an OS and hardware such as peripheral devices.
 The "computer system" also includes a homepage providing environment (or display environment) when a WWW system is used.
 A "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system. It further includes media that hold a program dynamically for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and media that hold a program for a certain period, such as volatile memory inside a computer system serving as the server or client in that case. The program may be one for realizing part of the functions described above, or one that realizes those functions in combination with a program already recorded in the computer system.
 100, 100A, 100B, 100C: display system; 10, 10A, 10B: display device; 11, 11A, 11B, 12, 12B: display unit; 2, 2A, 2B, 2C: image processing device; 210, 210A, 210B: contour correction unit; 220B, 220C: stereoscopic image generation unit.
 2011: first display unit; 2012: second display unit; 2013, 2013a: contour correction unit; 2014: detection unit; 2100, 2100a: display device.

Claims (23)

  1.  A display device comprising:
     a first display surface that displays a first image based on first image data, the first image including a display target;
     a second display surface that displays a second image based on second image data, the second image including the display target;
     a detection unit that detects a position of an observer observing the first display surface and the second display surface; and
     a control unit that corrects, based on the position of the observer detected by the detection unit, image data in the vicinity of a contour portion of the display target within the second image data, and causes the second display surface to display the corrected data.
  2.  The display device according to claim 1, wherein
     the control unit corrects, within the second image data, image data corresponding to the contour portion of the display target and image data corresponding to the vicinity of the contour portion of the display target.
  3.  The display device according to claim 1 or 2, wherein
     the control unit performs the correction so that, within the second image data, the image data corresponding to the contour portion of the display target and the image data corresponding to the vicinity of the contour portion of the display target are displayed on the second display surface.
  4.  The display device according to any one of claims 1 to 3, wherein
     the control unit corrects the second image data so that, at the position of the observer, the display target displayed on the first display surface and the second display surface is displayed stereoscopically by binocular parallax.
  5.  The display device according to any one of claims 1 to 4, wherein
     the first display surface is disposed at a depth position different from that of the second display surface.
  6.  The display device according to any one of claims 1 to 5, wherein
     the first image data and the second image data are generated from the same image data.
  7.  The display device according to any one of claims 1 to 5, wherein
     the second image data includes only data indicating the contour portion of the display target.
  8.  The display device according to any one of claims 1 to 7, wherein
     the control unit corrects a pixel value of at least one of the first image data and the second image data to change the depth position of the display target perceived by the observer.
  9.  The display device according to any one of claims 1 to 8, wherein
     at least one of the first display surface and the second display surface is a transmissive display unit.
  10.  A program for causing a computer of a display device, the display device having a first display surface that displays a first image based on first image data including a display target, a second display surface that displays a second image based on second image data including the display target, and a detection unit that detects a position of an observer observing the first display surface and the second display surface, to execute:
     a control step of correcting, based on the position of the observer detected by the detection unit, image data in the vicinity of a contour portion of the display target within the second image data, and causing the second display surface to display the corrected data.
  11.  A display device comprising:
     a first display surface that displays a first image based on first image data, the first image including an object;
     a detection unit that detects a relative position between an observer observing the first display surface and the first display surface; and
     a control unit that corrects, based on the relative position detected by the detection unit, image data in the vicinity of a contour of the object within the first image data, and causes the first display surface to display the corrected data.
  12.  The display device according to claim 11, further comprising
     a photographing unit that photographs the observer and generates an image, wherein
     the detection unit detects the relative position between the observer observing the first display surface and the first display surface based on the image generated by the photographing unit.
  13.  A program for causing a computer of a display device, the display device having a first display surface that displays a first image based on first image data including an object and a detection unit that detects a relative position between an observer observing the first display surface and the first display surface, to execute:
     a control step of correcting, based on the relative position detected by the detection unit, image data in the vicinity of a contour of the object within the first image data, and causing the first display surface to display the corrected data.
  14.  A display device comprising:
     a first display unit that displays an image of a display target at a first depth position;
     a second display unit that has a plurality of two-dimensionally arranged pixels and displays, at a second depth position different from the first depth position, a contour image indicating a contour portion of the display target; and
     a contour correction unit that corrects the contour image based on a position of a contour pixel that displays the contour image among the pixels of the second display unit, and on a contour position on the second display unit determined from a position of the contour portion corresponding to the contour pixel on the first display unit and a predetermined viewpoint position.
  15.  The display device according to claim 14, wherein
     the position of the contour pixel is a center position of the contour pixel, and
     the contour correction unit corrects the contour image by correcting pixel values of pixels in the vicinity of the contour pixel based on the center position of the contour pixel and the contour position.
  16.  The display device according to claim 15, wherein
     the pixels in the vicinity of the contour pixel include at least a pixel adjacent to the contour pixel, and
     the contour correction unit corrects the contour image by correcting a pixel value of the pixel adjacent to the contour pixel based on a direction from the center position of the contour pixel toward the contour position.
  17.  The display device according to claim 15 or 16, wherein
     the contour correction unit corrects the pixel values of the pixels in the vicinity of the contour pixel by a correction amount based on a distance between the center position of the contour pixel and the contour position.
  18.  The display device according to any one of claims 14 to 17, wherein
     the viewpoint position includes a position of a left eye and a position of a right eye of a user,
     the contour position includes a first contour position determined based on the position of the left eye and a second contour position determined based on the position of the right eye, and
     the contour correction unit corrects the contour image based on the position of the contour pixel, the first contour position, and the second contour position.
  19.  The display device according to any one of claims 14 to 18, further comprising
     a detection unit that detects the viewpoint position, wherein
     the contour correction unit corrects the contour image based on the position of the contour pixel and on the contour position determined from the position of the contour portion corresponding to the contour pixel on the first display unit and the detected viewpoint position.
  20.  The display device according to claim 19, wherein
     the detection unit detects a center position of a user's face as the viewpoint position.
  21.  The display device according to any one of claims 14 to 20, wherein
     the contour correction unit corrects the contour image by a correction amount based on a distance between the viewpoint position and either the first display unit or the second display unit.
  22.  A display device comprising
     a contour correction unit that, for a contour image indicating a contour portion of a display target that a second display unit displays at a second depth position different from a first depth position at which a first display unit displays an image of the display target, corrects the contour image based on a position of a contour pixel that displays the contour image among a plurality of two-dimensionally arranged pixels of the second display unit, and on a contour position on the second display unit determined from a position of the contour portion corresponding to the contour pixel on the first display unit and a predetermined viewpoint position.
  23.  A program for causing a computer to execute:
     a contour correction step of, for a contour image indicating a contour portion of a display target that a second display unit displays at a second depth position different from a first depth position at which a first display unit displays an image of the display target, correcting the contour image based on a position of a contour pixel that displays the contour image among a plurality of two-dimensionally arranged pixels of the second display unit, and on a contour position on the second display unit determined from a position of the contour portion corresponding to the contour pixel on the first display unit and a predetermined viewpoint position.
PCT/JP2014/051796 2013-01-31 2014-01-28 Image processing device, display device and program WO2014119555A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2013-017968 2013-01-31
JP2013017969 2013-01-31
JP2013017968 2013-01-31
JP2013-017969 2013-01-31

Publications (1)

Publication Number Publication Date
WO2014119555A1 true WO2014119555A1 (en) 2014-08-07

Family

ID=51262269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/051796 WO2014119555A1 (en) 2013-01-31 2014-01-28 Image processing device, display device and program

Country Status (1)

Country Link
WO (1) WO2014119555A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0927969A (en) * 1995-05-08 1997-01-28 Matsushita Electric Ind Co Ltd Method for generating intermediate image of plural images, parallax estimate method and device
JP2008042745A (en) * 2006-08-09 2008-02-21 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional display method
JP2010128450A (en) * 2008-12-01 2010-06-10 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional display object, three-dimensional image forming apparatus, method and program for forming three-dimensional image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11893755B2 (en) 2018-01-19 2024-02-06 Interdigital Vc Holdings, Inc. Multi-focal planes with varying positions
JP2021518701A (en) * 2018-03-23 2021-08-02 ピーシーエムエス ホールディングス インコーポレイテッド Multifocal plane-based method (MFP-DIBR) for producing a stereoscopic viewpoint in a DIBR system
CN114078451A (en) * 2020-08-14 2022-02-22 京东方科技集团股份有限公司 Display control method and display device
CN114078451B (en) * 2020-08-14 2023-05-02 京东方科技集团股份有限公司 Display control method and display device

Similar Documents

Publication Publication Date Title
KR101761751B1 (en) Hmd calibration with direct geometric modeling
JP6443654B2 (en) Stereoscopic image display device, terminal device, stereoscopic image display method, and program thereof
US6677939B2 (en) Stereoscopic image processing apparatus and method, stereoscopic vision parameter setting apparatus and method and computer program storage medium information processing method and apparatus
US9467685B2 (en) Enhancing the coupled zone of a stereoscopic display
US9848184B2 (en) Stereoscopic display system using light field type data
US20170295354A1 (en) Efficient determination of optical flow between images
JP5366547B2 (en) Stereoscopic display device
US20100091031A1 (en) Image processing apparatus and method, head mounted display, program, and recording medium
CN103562963A (en) Systems and methods for alignment, calibration and rendering for an angular slice true-3D display
JP2008090617A (en) Device, method and program for creating three-dimensional image
US9681122B2 (en) Modifying displayed images in the coupled zone of a stereoscopic display based on user comfort
CN107071382A (en) Stereoscopic display device
KR20120075829A (en) Apparatus and method for rendering subpixel adaptively
TW201322733A (en) Image processing device, three-dimensional image display device, image processing method and image processing program
JP2008085503A (en) Three-dimensional image processing apparatus, method and program, and three-dimensional image display device
US11720996B2 (en) Camera-based transparent display
JP2012079291A (en) Program, information storage medium and image generation system
JP2014110568A (en) Image processing device, image processing method, and program
US11568555B2 (en) Dense depth computations aided by sparse feature matching
WO2014119555A1 (en) Image processing device, display device and program
EP3929900A1 (en) Image generation device, head-mounted display, and image generation method
JP6509101B2 (en) Image display apparatus, program and method for displaying an object on a spectacle-like optical see-through type binocular display
TWI500314B (en) A portrait processing device, a three-dimensional portrait display device, and a portrait processing method
EP3930318A1 (en) Display device and image display method
US20140362197A1 (en) Image processing device, image processing method, and stereoscopic image display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14745714

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14745714

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP