WO2013001697A1 - Video processing device and video processing method - Google Patents

Video processing device and video processing method

Info

Publication number
WO2013001697A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
display screen
image
observer
eye
Prior art date
Application number
PCT/JP2012/002714
Other languages
French (fr)
Japanese (ja)
Inventor
川口 謙一
Original Assignee
パナソニック株式会社
Priority date
Filing date
Publication date
Application filed by パナソニック株式会社
Publication of WO2013001697A1 publication Critical patent/WO2013001697A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/341Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/028Improving the quality of display appearance by changing the viewing angle properties, e.g. widening the viewing angle, adapting the viewing angle to the view direction
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0492Change of orientation of the displayed image, e.g. upside-down, mirrored

Definitions

  • the present invention relates to a video processing device and a video processing method, and more particularly to a video processing device and a video processing method for processing an input video so that an observer can observe the input video displayed on a display screen.
  • Conventionally, a technique capable of simultaneously observing a plurality of videos on one display screen has been proposed (see, for example, Patent Documents 1 and 2).
  • For example, one stereoscopic video is divided into a left-eye video and a right-eye video, the left-eye video and the right-eye video are alternately displayed on one display screen, and a technique of observing them using glasses equipped with shutters that operate in synchronization with this display is applied.
  • That is, the above-mentioned conventional technique displays a plurality of videos in order and, while a certain video is displayed, opens the shutters for both eyes or one eye of the glasses worn by the observer who wants to observe that video. In this way, a plurality of observers can observe a plurality of videos at the same time.
  • the present invention has been made to solve the above-described problem, and an object of the present invention is to provide a video processing apparatus and a video processing method capable of observing in an original direction to be observed regardless of the viewing direction.
  • A video processing apparatus according to one aspect of the present invention is a video processing apparatus that processes an input video so that an observer can observe the input video displayed on a display screen, and includes: a determination unit that determines the side of the display screen that is closest to the observation position of the observer; a video rotation unit that generates a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the base; and a video output unit that outputs the generated rotated video.
  • With this configuration, the video processing apparatus determines the side of the display screen that is closest to the observation position of the observer, generates a rotated video by rotating the input video so that the determined side becomes the base, and outputs the generated rotated video so that it is displayed on the display screen.
  • That is, the video is displayed on the display screen such that the side of the display screen closest to the observer becomes the base. For this reason, the video can be observed in the original direction to be observed regardless of the observation direction.
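  • As a rough illustration of this flow (a minimal sketch only: the coordinate-based side selection, the helper names, and the 90-degree direction convention are assumptions for illustration and are not taken from this disclosure), the determination and rotation steps could look as follows:

```python
import numpy as np

def determine_nearest_side(observer_xy, width, height):
    """Return the side of a width x height screen closest to an observer position
    given in screen coordinates (x to the right, y downward)."""
    x, y = observer_xy
    distances = {"bottom": height - y, "top": y, "left": x, "right": width - x}
    return min(distances, key=distances.get)

def rotate_for_side(frame, base_side):
    """Rotate a frame (rows x columns [x channels]) so that base_side becomes the base.
    The mapping of left/right to a 90-degree direction is an illustrative convention."""
    turns = {"bottom": 0, "left": 1, "top": 2, "right": 3}[base_side]  # 90-degree CCW steps
    return np.rot90(frame, k=turns)

# An observer sitting beyond the top edge of a 3x2 screen sees the frame rotated 180 degrees.
frame = np.arange(6).reshape(2, 3)
side = determine_nearest_side((1.5, -0.5), width=3, height=2)   # "top"
rotated = rotate_for_side(frame, side)                          # frame rotated by 180 degrees
```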
  • Further, the video processing apparatus may include a sensor that detects a signal emitted from a device used by the observer. When the sensor detects the signal emitted from the device, the determination unit determines the side of the display screen closest to the sensor as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer using the device, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the base.
  • With this configuration, the video processing apparatus includes a sensor that detects a signal emitted from a device used by an observer and, when the sensor detects the signal, generates a rotated video obtained by rotating the input video so that the side of the display screen closest to the sensor becomes the base.
  • Since the device is used by the observer, the observer is considered to be observing near the sensor that detected the device. For this reason, the video can be displayed on the display screen so that the side of the display screen closest to the sensor becomes the base, and the observer can observe the video in the original direction to be observed.
  • Further, the video rotation unit may input, as the input videos, a left-eye video and a right-eye video for viewing a stereoscopic video, and may generate, as the rotated videos, a rotated left-eye video and a rotated right-eye video obtained by rotating the left-eye video and the right-eye video; the video output unit then outputs the generated rotated left-eye video and rotated right-eye video as the left-eye video and right-eye video for the observer.
  • With this configuration, the video processing apparatus inputs a left-eye video and a right-eye video for viewing a stereoscopic video, and generates and outputs a rotated left-eye video and a rotated right-eye video obtained by rotating them. As a result, a stereoscopic video can be observed in the original direction to be observed regardless of the observation direction.
  • Further, the video output unit may alternately output the rotated left-eye video and the rotated right-eye video at a fixed time interval, so that they are alternately displayed on the display screen at the fixed time interval.
  • With this configuration, the video output unit alternately outputs the rotated left-eye video and the rotated right-eye video at a fixed time interval, and they are alternately displayed on the display screen at that fixed time interval. Accordingly, by viewing the left-eye video and the right-eye video alternately using stereoscopic viewing glasses, the observer can observe the stereoscopic video in the original direction to be observed.
  • Further, the determination unit may determine the side of the display screen closest to the position of the stereoscopic viewing glasses worn by the observer as the side of the display screen closest to the observation position.
  • With this configuration, the determination unit determines the side of the display screen that is closest to the position of the stereoscopic viewing glasses worn by the observer, and a rotated video is generated by rotating the input video so that that side becomes the base.
  • Since the observer wears the glasses, the position of the glasses corresponds to the observation position of the observer. For this reason, the video is displayed on the display screen so that the side of the display screen closest to the glasses becomes the base, and the observer can observe the video in the original direction to be observed.
  • the glasses are shutter glasses that open and close the left-eye shutter and the right-eye shutter in synchronization with the time when the left-eye video and the right-eye video are displayed on the display screen.
  • Further, the video processing apparatus may include a video control unit that controls opening and closing of the left-eye shutter and the right-eye shutter of each pair of shutter glasses worn by a plurality of observers. When the left-eye video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, the video control unit opens the left-eye shutter of the shutter glasses worn by the observer at that observation position, closes the right-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by every observer other than the observer at that observation position. Similarly, when the right-eye video for the observer at that observation position is displayed on the display screen, the video control unit opens the right-eye shutter of the shutter glasses worn by the observer at that observation position, closes the left-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by every other observer.
  • With this configuration, the glasses are shutter glasses, and when the left-eye video for the observer close to the determined side is displayed, the video processing apparatus opens the left-eye shutter of that observer's shutter glasses, closes the right-eye shutter, and closes the left-eye shutter and the right-eye shutter of the shutter glasses of every other observer. Likewise, when the right-eye video for that observer is displayed, the video processing apparatus opens the right-eye shutter of that observer's shutter glasses, closes the left-eye shutter, and closes the left-eye shutter and the right-eye shutter of the shutter glasses of every other observer.
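  • A minimal sketch of this shutter control rule (the observer keys, eye labels, and function name are illustrative assumptions, not the interface described here):

```python
def shutter_states(displayed_observer, displayed_eye, all_observers):
    """Return, per observer, which shutters (left, right) should be open while the
    frame for (displayed_observer, displayed_eye) is on the display screen."""
    states = {}
    for observer in all_observers:
        if observer == displayed_observer:
            states[observer] = {"left": displayed_eye == "left",
                                "right": displayed_eye == "right"}
        else:
            # Observers other than the intended one have both shutters closed.
            states[observer] = {"left": False, "right": False}
    return states

# While observer A's left-eye frame is shown, only A's left shutter is open.
assert shutter_states("A", "left", ["A", "B"]) == {
    "A": {"left": True, "right": False},
    "B": {"left": False, "right": False},
}
```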
  • Further, the determination unit may determine the side of the display screen that is closest to the position of the glasses worn by the observer as the side of the display screen that is closest to the observation position. The glasses are shutter glasses that open and close a left-eye shutter and a right-eye shutter, and the video processing apparatus further includes a video control unit that controls opening and closing of the left-eye shutter and the right-eye shutter of each pair of shutter glasses worn by a plurality of observers. When the video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, the video control unit opens the left-eye shutter and the right-eye shutter of the shutter glasses worn by the observer at that observation position, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by every other observer.
  • With this configuration, the glasses are shutter glasses, and when the video for the observer at the observation position corresponding to the side of the display screen closest to the position of that observer's glasses is displayed on the display screen, the video processing apparatus opens the left-eye shutter and the right-eye shutter of that observer's shutter glasses and closes the left-eye shutter and the right-eye shutter of the shutter glasses of every other observer.
  • Further, the video processing apparatus may include the display screen, which has a rectangular shape.
  • With this configuration, the video processing apparatus includes a rectangular display screen, and a stereoscopic video can be observed in the original direction to be observed.
  • Further, the video processing apparatus may include the rectangular display screen and sensors that detect the glasses at positions corresponding to any two of the four sides forming the outer periphery of the display screen. When a sensor detects the glasses, the determination unit determines the side of the display screen closest to the sensor that detected the glasses as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the detected observer wearing the glasses, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the base.
  • With this configuration, the video processing apparatus includes a rectangular display screen and sensors that detect glasses near any two of the four sides of the display screen, and when a sensor detects the glasses, a rotated video is generated by rotating the input video so that the side of the display screen closest to the sensor that detected the glasses becomes the base.
  • Since the observer wears the glasses, if the glasses are detected near one of the two sides, the position of the glasses corresponds to the observation position of the observer. For this reason, the video is displayed on the display screen so that, of the two sides, the side closest to the glasses becomes the base, and the observer can observe the video in the original direction to be observed.
  • Further, the two sides may be two opposite sides among the four sides forming the outer periphery of the display screen, and the video rotation unit may generate the rotated video by rotating, by 180 degrees, the input video for the observer at one of the observation positions corresponding to the two sides.
  • With this configuration, the video processing apparatus rotates, by 180 degrees, the input video for the observer at one of the observation positions corresponding to two opposite sides of the display screen, and generates the rotated video. That is, if two observers face each other across the display screen, the video observed by one appears upside down to the other. For this reason, by rotating the input video for one of the observers by 180 degrees, both observers can observe the video in the original direction to be observed.
  • Further, the video processing apparatus may include the rectangular display screen and sensors that detect the glasses at positions corresponding to any three of the four sides forming the outer periphery of the display screen. When a sensor detects the glasses, the determination unit determines the side of the display screen closest to the sensor that detected the glasses as the side of the display screen closest to the observation position. When the side determined by the determination unit is the first side of the three sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the first side by 180 degrees, and when the side determined by the determination unit is the second side of the three sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the second side by 90 degrees, to generate the rotated videos.
  • With this configuration, the video processing apparatus includes a rectangular display screen and sensors that detect glasses near any three of the four sides of the display screen. If the side of the display screen closest to the sensor that detected the glasses is the first of the three sides, the input video for the observer at the observation position near the first side is rotated by 180 degrees, and if it is the second side, the input video for the observer at that observation position is rotated by 90 degrees, to generate the rotated videos. That is, when three observers are each observing from one of three sides of the display screen, the videos for two of them need to be rotated by 180 degrees and by 90 degrees relative to the video for the remaining observer. For this reason, by rotating the input videos for those observers by 180 degrees and by 90 degrees, all three observers can observe the videos in the original direction to be observed.
  • Further, the video rotation unit may include a video processing unit that generates the rotated video by adjusting the size of the input video rotated by 90 degrees to the size of the display screen.
  • With this configuration, the video processing apparatus adjusts the size of the input video rotated by 90 degrees to the size of the display screen and generates the rotated video. That is, when the display screen is rectangular, a video rotated by 90 degrees no longer matches the size of the display screen. For this reason, by adjusting the rotated video to the size of the display screen, the observer can observe the video properly.
  • Further, the video processing unit may perform the adjustment by letterboxing, that is, by reducing the size of the input video rotated by 90 degrees so that the entire video is displayed on the display screen.
  • With this configuration, the video processing apparatus performs letterboxing, which reduces the size of the input video rotated by 90 degrees so that the entire video is displayed on the display screen. Since the video whose size after rotation differs from the size of the display screen is adjusted by letterboxing, the observer can observe the entire video.
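  • A minimal sketch of such a letterbox adjustment (the function name and the uniform-scaling-plus-centering choice are illustrative assumptions; the disclosure only specifies that the whole rotated video is reduced so that it fits on the display screen):

```python
def letterbox_size(rotated_w, rotated_h, screen_w, screen_h):
    """Return (scaled_w, scaled_h, pad_x, pad_y) so that a rotated frame fits entirely
    on the screen while keeping its aspect ratio; the padding is filled with bars."""
    scale = min(screen_w / rotated_w, screen_h / rotated_h)
    scaled_w, scaled_h = int(rotated_w * scale), int(rotated_h * scale)
    pad_x = (screen_w - scaled_w) // 2
    pad_y = (screen_h - scaled_h) // 2
    return scaled_w, scaled_h, pad_x, pad_y

# A 1920x1080 frame rotated by 90 degrees becomes 1080x1920; on a 1920x1080 screen it is
# scaled down and centered with bars on the left and right.
print(letterbox_size(1080, 1920, 1920, 1080))  # (607, 1080, 656, 0)
```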
  • Further, the video processing apparatus may include the rectangular display screen and sensors that detect the glasses at positions corresponding to the four sides forming the outer periphery of the display screen. When a sensor detects the glasses, the determination unit determines the side of the display screen closest to the sensor that detected the glasses as the side of the display screen closest to the observation position. When the side determined by the determination unit is the first side of the four sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the first side by 180 degrees; when the side determined by the determination unit is the second side of the four sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the second side by 90 degrees to the left; and when the side determined by the determination unit is the third side of the four sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the third side by 90 degrees to the right, to generate the rotated videos.
  • With this configuration, the video processing apparatus includes a rectangular display screen and sensors that detect glasses near the four sides of the display screen. When the side of the display screen closest to the sensor that detected the glasses is the first of the four sides, the input video for the observer at the observation position near the first side is rotated by 180 degrees; when it is the second side, the input video for the observer at the observation position near the second side is rotated by 90 degrees to the left; and when it is the third side, the input video for the observer at the observation position near the third side is rotated by 90 degrees to the right, to generate the rotated videos. That is, when observers are observing from the four sides of the display screen, the videos for the other three observers need to be rotated by 180 degrees, by 90 degrees to the left, and by 90 degrees to the right relative to the video for one observer. For this reason, by rotating the input videos for those observers by 180 degrees, by 90 degrees to the left, and by 90 degrees to the right, all of the observers can observe the videos in the original direction to be observed.
  • Further, the video processing apparatus may include the display screen and an optical lens disposed on the observation-side surface of the display screen. The video output unit outputs a composite video obtained by dividing a plurality of videos, including the rotated video generated by the video rotation unit, and combining them into a plurality of regions, and the optical lens refracts the composite video, which is displayed divided into the plurality of regions of the display screen, so that one of the plurality of videos before division is observed from one side of the display screen.
  • With this configuration, the video processing apparatus includes a display screen and an optical lens, and the optical lens refracts the composite video displayed divided into a plurality of regions of the display screen so that one of the plurality of videos before division is observed from one side of the display screen.
  • For example, by using a lenticular lens as the optical lens, one video can be observed from one side of the display screen. Therefore, the video can be observed in the original direction to be observed regardless of the observation direction.
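  • As a greatly simplified illustration of such a composite video (a sketch only: a real lenticular layout depends on the lens pitch and sub-pixel geometry, which are not fixed here), two videos could be interleaved column by column so that the lens directs each set of columns toward a different side of the screen:

```python
import numpy as np

def interleave_columns(video_a, video_b):
    """Build a composite frame whose even columns come from video_a and odd columns
    come from video_b; an optical lens over such a frame can direct each set of
    columns toward a different viewing side (greatly simplified)."""
    assert video_a.shape == video_b.shape
    composite = video_a.copy()
    composite[:, 1::2] = video_b[:, 1::2]
    return composite

frame_a = np.zeros((4, 6), dtype=np.uint8)       # video intended for one side
frame_b = np.full((4, 6), 255, dtype=np.uint8)   # video intended for the opposite side
composite = interleave_columns(frame_a, frame_b)
```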
  • Further, the display screen may be rectangular, and the optical lens may refract the videos so that one of the plurality of videos is observed from each of two opposite sides among the four sides forming the outer periphery of the display screen.
  • With this configuration, the display screen is rectangular, and the optical lens refracts the videos so that one of the plurality of videos is observed from each of two opposite sides of the display screen. For this reason, when an observer observes from the direction of either of the two sides, the video can be observed in the original direction to be observed.
  • Further, the display screen may be rectangular, and the optical lens may refract the videos so that one of the plurality of videos is observed from each of the four sides forming the outer periphery of the display screen.
  • With this configuration, the display screen is rectangular, and the optical lens refracts the videos so that one of the plurality of videos is observed from each of the four sides of the display screen. For this reason, no matter from which of the four sides the observer observes, the video can be observed in the original direction to be observed.
  • Further, the video processing apparatus may include a sensor that detects a signal emitted by a controller used by the observer to control the video processing apparatus. When the sensor detects the signal emitted by the controller, the determination unit determines the side of the display screen closest to the sensor as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer using the controller, a rotated video obtained by rotating the input video so that the determined side becomes the base.
  • With this configuration, the video processing apparatus includes a sensor that detects a signal emitted by the controller used by the observer, and generates a rotated video obtained by rotating the input video so that the side of the display screen closest to the sensor that detected the controller becomes the base.
  • Since the controller is a device used by the observer, the observer is considered to be observing near the sensor that detected the controller. For this reason, the video can be displayed on the display screen so that the side of the display screen closest to the sensor becomes the base, and the observer can observe the video in the original direction to be observed.
  • Further, the video processing apparatus may include a camera that captures an image, and a person recognition unit that recognizes a person captured in the image captured by the camera and detects that person as an observer. The determination unit determines the side of the display screen closest to the observer detected by the person recognition unit as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer detected by the person recognition unit, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the base.
  • With this configuration, the video processing apparatus recognizes a person shown in the image captured by the camera, detects the person as an observer, determines the side of the display screen closest to the detected observer as the side closest to the observation position, and rotates the input video accordingly. That is, by capturing an image of the observer with the camera, the video is displayed on the display screen so that the side of the display screen closest to the observer becomes the base. For this reason, the video can be observed in the original direction to be observed regardless of the observation direction.
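  • A rough sketch of this kind of person recognition, using OpenCV's bundled Haar face detector as a stand-in for the person recognition unit (the assumption that the camera image is aligned with the display screen, and all names below, are illustrative and not part of this disclosure):

```python
import cv2

def nearest_side_from_faces(image_bgr):
    """Detect faces and return the side of the (camera-aligned) frame nearest to the
    first detected face center, or None if no face is found."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    cx, cy = x + w / 2, y + h / 2
    height, width = gray.shape
    distances = {"left": cx, "right": width - cx, "top": cy, "bottom": height - cy}
    return min(distances, key=distances.get)
```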
  • Further, the video processing apparatus may include: an acquisition unit that includes at least one of a network interface, a memory card interface, an optical disc drive, and a tuner and acquires a video signal; a decoding unit that decodes the acquired video signal; and a display screen that displays the video decoded by the decoding unit and output from the video output unit.
  • the video processing apparatus acquires the video signal, decodes the acquired video signal, outputs the video, and displays it on the display screen.
  • the video processing apparatus can also be realized as a system such as a mobile information terminal (tablet terminal) that acquires a video signal, processes it, and displays it.
  • the present invention can be realized not only as such a video processing apparatus, but also as a video processing method in which processing of a characteristic processing unit included in the video processing apparatus is used as a step.
  • the present invention can also be realized as an integrated circuit including a characteristic processing unit included in such a video processing apparatus.
  • the present invention can also be realized as a program that causes a computer to execute characteristic processing included in the video processing method.
  • a program can be distributed via a recording medium such as a CD-ROM and a transmission medium such as the Internet.
  • According to the present invention, in a video processing apparatus such as a tablet terminal, a video can be observed in the original direction to be observed regardless of the observation direction.
  • FIG. 1 is an external view showing an external appearance of a video processing apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is a block diagram showing a functional configuration of the video processing apparatus according to Embodiment 1 of the present invention.
  • FIG. 3 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit according to Embodiment 1 of the present invention.
  • FIG. 4 is a block diagram showing a functional configuration of the video rotation unit according to Embodiment 1 of the present invention.
  • FIG. 5 is a flowchart for explaining video processing performed by the video processing apparatus according to Embodiment 1 of the present invention.
  • FIG. 6 is a flowchart for explaining processing in which the determination unit according to Embodiment 1 of the present invention determines the side of the display screen.
  • FIG. 7 is a diagram for explaining processing in which the determination unit according to Embodiment 1 of the present invention determines the sides of the display screen.
  • FIG. 8 is a flowchart for explaining a process in which the video rotation unit according to Embodiment 1 of the present invention generates a rotated video.
  • FIG. 9 is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 10A is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 10B is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 10C is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 11A is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 11B is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 11C is a diagram for explaining an input video input by the video input / output unit according to Embodiment 1 of the present invention.
  • FIG. 12A is a diagram for describing a process in which the rotation control unit according to Embodiment 1 of the present invention rotates an input video.
  • FIG. 12B is a diagram for describing processing in which the rotation control unit according to Embodiment 1 of the present invention rotates the input video.
  • FIG. 13A is a diagram for describing processing in which the rotation control unit according to Embodiment 1 of the present invention rotates an input image.
  • FIG. 13B is a diagram for describing processing in which the rotation control unit according to Embodiment 1 of the present invention rotates the input video.
  • FIG. 14 is a flowchart for explaining processing in which the video output unit according to Embodiment 1 of the present invention outputs a rotated video.
  • FIG. 15 is a diagram for explaining processing in which the video output unit according to Embodiment 1 of the present invention outputs a rotated video.
  • FIG. 16 is an external view showing the external appearance of the video processing apparatus according to Embodiment 2 of the present invention.
  • FIG. 17 is a block diagram illustrating a functional configuration of the stereoscopic video reproduction unit according to Embodiment 2 of the present invention.
  • FIG. 18 is a block diagram showing a functional configuration of the video rotation unit according to Embodiment 2 of the present invention.
  • FIG. 19 is a flowchart for explaining processing of generating a rotated image by the image rotation unit according to Embodiment 2 of the present invention.
  • FIG. 20 is a diagram for explaining processing in which the video rotation unit according to Embodiment 2 of the present invention generates a rotated video.
  • FIG. 21 is a diagram for explaining a process in which the video output unit according to Embodiment 2 of the present invention outputs a video.
  • FIG. 22 is an external view showing the external appearance of the video processing apparatus according to Embodiment 3 of the present invention.
  • FIG. 23 is an external view showing the external appearance of the optical lens according to Embodiment 3 of the present invention.
  • FIG. 24 is a diagram illustrating a process in which the video processing apparatus according to the third embodiment of the present invention displays a video on the display screen.
  • FIG. 25 is an external view showing an external appearance of a video processing apparatus according to Modification 1 of Embodiment 3 of the present invention.
  • FIG. 26 is a diagram illustrating a process in which the video processing device according to the first modification of the third embodiment of the present invention displays a video on the display screen.
  • FIG. 27 is an external view showing an external appearance of a video processing apparatus according to Modification 2 of Embodiment 3 of the present invention.
  • FIG. 28 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit according to Embodiment 4 of the present invention.
  • FIG. 29 is a flowchart for explaining processing in which the stereoscopic video reproduction unit according to Embodiment 4 of the present invention determines the side of the display screen.
  • FIG. 30 is a diagram for describing processing in which the stereoscopic video reproduction unit according to Embodiment 4 of the present invention determines the sides of the display screen.
  • FIG. 31 is a diagram for explaining processing in which the stereoscopic video reproduction unit according to Embodiment 4 of the present invention determines the side of the display screen.
  • FIG. 32 is a flowchart for explaining a process in which the stereoscopic video reproduction unit according to the modification of the fourth embodiment of the present invention determines the side of the display screen.
  • FIG. 33 is a diagram for explaining a process in which the stereoscopic video reproduction unit according to the modification of the fourth embodiment of the present invention determines the side of the display screen.
  • FIG. 34 is a diagram for explaining processing in which the stereoscopic video reproduction unit according to the modification of the fourth embodiment of the present invention determines the side of the display screen.
  • FIG. 35 is a diagram showing a minimum configuration of a video processing apparatus according to an embodiment of the present invention and a modification thereof.
  • FIG. 1 is an external view showing the external appearance of a video processing apparatus 10 according to Embodiment 1 of the present invention.
  • the video processing apparatus 10 is a mobile information terminal such as a tablet terminal, and processes the input video so that an observer can observe the input video displayed on the display screen.
  • the video processing apparatus 10 includes a 3D glass sensor L111, a 3D glass sensor R112, a transmitter L151, a transmitter R152, and a display screen 600.
  • The glasses 20 are shutter glasses worn by the observer A, which open and close the left-eye shutter and the right-eye shutter in synchronization with the timing at which the left-eye video and the right-eye video are displayed on the display screen 600.
  • the glasses 20 include a transmitter 21 and a receiver 22.
  • the glasses 30 are shutter glasses worn by the observer B, and include a transmitter 31 and a receiver 32.
  • the display screen 600 is a rectangular screen that displays an input video. That is, the display screen 600 has four sides on the outer periphery.
  • the 3D glass sensor L111 and the 3D glass sensor R112 are sensors that detect signals emitted from devices used by the observer. Specifically, the 3D glass sensor L111 and the 3D glass sensor R112 detect a signal emitted from the transmitter 21 of the glasses 20 or a signal emitted from the transmitter 31 of the glasses 30.
  • the signals emitted from the transmitters 21 and 31 are, for example, radio waves or infrared rays, but any signal may be used. Since detection of the signal depends on the distance between the sensor and the transmitter, whether the signal is detected as a strong signal, a weak signal, or not detected varies depending on the position of the glasses.
  • The signal emitted from the transmitter 21 of the glasses 20 is detected by both the 3D glass sensor L111 and the 3D glass sensor R112. In this case, it is detected as a stronger signal by the closer 3D glass sensor. Then, the video processing apparatus 10 performs control assuming that the observer of the glasses 20 exists near the 3D glass sensor detected as a stronger signal. The same applies to the glasses 30.
  • the glasses 20 are near the 3D glass sensor L111, and the glasses 30 are near the 3D glass sensor R112.
  • the signal emitted from the transmitter 21 of the glasses 20 is detected by the 3D glass sensor L111 and the signal emitted from the transmitter 31 of the glasses 30 is detected by the 3D glass sensor R112.
  • In other words, the 3D glass sensor L111 and the 3D glass sensor R112 are sensors that detect glasses placed at positions corresponding to any two of the four sides forming the outer periphery of the display screen 600.
  • the two sides are two sides facing each other among the four sides forming the outer peripheral edge of the display screen 600.
  • The transmitter L151 emits, to the receiver 22 of the glasses 20, a synchronization signal for opening and closing the left-eye shutter and the right-eye shutter of the glasses 20 in synchronization with the timing at which the left-eye video and the right-eye video are displayed on the display screen 600.
  • the glasses 20 open and close the left-eye shutter and the right-eye shutter according to the synchronization signal received by the receiver 22.
  • Similarly, the transmitter R152 emits, to the receiver 32 of the glasses 30, a synchronization signal for opening and closing the left-eye shutter and the right-eye shutter of the glasses 30 in synchronization with the timing at which the left-eye video and the right-eye video are displayed on the display screen 600.
  • the glasses 30 open and close the left-eye shutter and the right-eye shutter according to the synchronization signal received by the receiver 32.
  • FIG. 2 is a block diagram showing a functional configuration of the video processing apparatus 10 according to Embodiment 1 of the present invention.
  • the video processing apparatus 10 includes a stereoscopic video playback unit 100, an acquisition unit 200, a selector 300, a decoding unit 400, an audio output unit 500, a user input unit 700, and a control unit 800.
  • The acquisition unit 200 includes at least one of the network interface 210, the memory card interface 220, the optical disc drive 230, and the tuner 240, and acquires a video/audio signal or a video-only signal. Specifically, the acquisition unit 200 acquires an encoded stream from a communication network or a broadcast wave via the network interface 210 or the tuner 240, or acquires the encoded stream from a memory card or an optical disc inserted into the video processing apparatus 10 via the memory card interface 220 or the optical disc drive 230.
  • the selector 300 outputs the encoded stream acquired by the acquisition unit 200 to the decoding unit 400.
  • the decoding unit 400 decodes the encoded stream acquired by the acquisition unit 200.
  • the audio output unit 500 inputs an audio signal obtained by the decoding unit 400 decoding the encoded stream and outputs it as audio.
  • the user input unit 700 accepts an input operation by the user.
  • The control unit 800 outputs, to the stereoscopic video reproduction unit 100, control information for controlling the stereoscopic video reproduction unit 100, using the information obtained when the decoding unit 400 decodes the encoded stream and the information acquired by the user input unit 700.
  • The stereoscopic video reproduction unit 100 acquires the input video obtained by the decoding unit 400 decoding the encoded stream, processes the input video to generate an output video, and outputs the output video so that it is displayed on the display screen 600. The stereoscopic video reproduction unit 100 will be described in detail later.
  • the non-volatile memory 910 and the volatile memory 920 are memories that store information used for the video processing apparatus 10 to process video.
  • FIG. 3 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit 100 according to Embodiment 1 of the present invention.
  • The stereoscopic video reproduction unit 100 includes a determination unit 110, a video control unit 120, a video rotation unit 130, a video output unit 140, and a synchronization signal generation unit 150.
  • The determination unit 110 determines the side of the display screen 600 that is closest to the observation position of the observer. Specifically, when a sensor detects a signal emitted by a device used by the observer, the determination unit 110 determines the side of the display screen 600 closest to that sensor as the side of the display screen 600 closest to the observation position.
  • In the present embodiment, the determination unit 110 determines the side of the display screen 600 that is closest to the position of the stereoscopic viewing glasses worn by the observer as the side of the display screen 600 that is closest to the observation position. That is, when the 3D glass sensor L111 detects the glasses 20, the determination unit 110 determines the side of the display screen 600 closest to the 3D glass sensor L111 as the side of the display screen 600 closest to the observation position, and when the 3D glass sensor R112 detects the glasses 30, the determination unit 110 determines the side of the display screen 600 closest to the 3D glass sensor R112 as the side of the display screen 600 closest to the observation position.
  • the video control unit 120 controls the video rotation unit 130, the video output unit 140, and the synchronization signal generation unit 150. Specifically, the video control unit 120 outputs the sides determined by the determination unit 110 to the video rotation unit 130 to rotate the video, and gives a video output instruction to the video output unit 140.
  • the video control unit 120 causes the synchronization signal generation unit 150 to generate a synchronization signal, and controls the opening and closing of the left eye shutter and the right eye shutter of each shutter glasses worn by a plurality of observers. That is, the video control unit 120 controls the opening and closing of the left-eye shutter and the right-eye shutter for the glasses 20 worn by the observer A and the glasses 30 worn by the observer B.
  • Specifically, when the left-eye video for the observer at the observation position corresponding to the determined side is displayed on the display screen 600, the video control unit 120 opens the left-eye shutter of the shutter glasses worn by the observer at that observation position, closes the right-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by every observer other than the observer at that observation position. Similarly, when the right-eye video for the observer at that observation position is displayed, the video control unit 120 opens the right-eye shutter of the shutter glasses worn by the observer at that observation position, closes the left-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by every other observer.
  • The synchronization signal generation unit 150 generates, in accordance with the instruction of the video control unit 120, a synchronization signal for opening and closing the left-eye shutter and the right-eye shutter of the shutter glasses. Specifically, the synchronization signal generation unit 150 generates synchronization signals for opening and closing the left-eye shutters and the right-eye shutters of the glasses 20 and 30, and outputs the generated synchronization signals to the glasses 20 and 30 via the transmitter L151 and the transmitter R152.
  • the image rotation unit 130 generates a rotation image obtained by rotating the input image so that the side determined by the determination unit 110 becomes the bottom in accordance with an instruction from the image control unit 120.
  • Specifically, the video rotation unit 130 generates a rotated video by rotating, by 180 degrees, the input video for the observer at one of the observation positions corresponding to the two sides of the display screen 600, as the video for the observer wearing the detected glasses. This will be described in detail below.
  • FIG. 4 is a block diagram showing a functional configuration of the video rotation unit 130 according to Embodiment 1 of the present invention.
  • the video rotation unit 130 includes a video input / output unit 131, a rotation control unit 132, and a memory 133.
  • the video input / output unit 131 inputs, as input videos, a left-eye video that is a left-eye video and a right-eye video that is a right-eye video for viewing a stereoscopic video. Then, the video input / output unit 131 stores the input left-eye video and right-eye video in the memory 133.
  • The rotation control unit 132 reads the left-eye video and the right-eye video from the memory 133, and generates, as the rotated videos, a rotated left-eye video and a rotated right-eye video obtained by rotating the left-eye video and the right-eye video. Then, the rotation control unit 132 stores the generated rotated left-eye video and rotated right-eye video in the memory 133.
  • the video input / output unit 131 reads the rotated left-eye video and the rotated right-eye video from the memory 133, and outputs them to the video output unit 140.
  • Note that the video rotation unit 130 need not include the rotation control unit 132; instead, when the video input/output unit 131 reads the videos from the memory 133, it may read the left-eye video and the right-eye video out as the rotated left-eye video and the rotated right-eye video and output them to the video output unit 140.
  • the video output unit 140 outputs the rotated video generated by the video rotation unit 130.
  • Specifically, the video output unit 140 alternately outputs the rotated left-eye video and the rotated right-eye video generated by the video rotation unit 130 as the left-eye video and the right-eye video for the observer at a fixed time interval, so that they are alternately displayed on the display screen 600 at that fixed time interval.
  • the video output unit 140 alternately outputs a left-eye video and a right-eye video for the viewer A and the viewer B at regular time intervals in accordance with an instruction from the video control unit 120.
  • FIG. 5 is a flowchart for explaining video processing performed by the video processing apparatus 10 according to Embodiment 1 of the present invention.
  • the determination unit 110 determines the side of the display screen 600 that is closest to the observation position of the observer (S102). A detailed description of the process in which the determination unit 110 determines the sides of the display screen 600 will be described later.
  • Next, the video rotation unit 130 generates a rotated video obtained by rotating the input video so that the side determined by the determination unit 110 becomes the base (S104). A detailed description of the process in which the video rotation unit 130 generates the rotated video will be given later.
  • the video output unit 140 outputs the rotated video generated by the video rotating unit 130 (S106). A detailed description of the process in which the video output unit 140 outputs the rotated video will be described later.
  • FIG. 6 is a flowchart for explaining processing in which the determination unit 110 according to Embodiment 1 of the present invention determines the sides of the display screen 600.
  • FIG. 7 is a diagram for explaining a process in which the determination unit 110 according to Embodiment 1 of the present invention determines the sides of the display screen 600.
  • First, a sensor detects a signal emitted by a device used by an observer (S202). Specifically, as illustrated in FIG. 7, the 3D glass sensor L111 detects the glasses 20, and the 3D glass sensor R112 detects the glasses 30.
  • Next, the determination unit 110 determines the side of the display screen 600 that is closest to the observation position of the observer.
  • Specifically, when the 3D glass sensor L111 detects the signal emitted by the transmitter 21 of the glasses 20, or detects that signal more strongly than the 3D glass sensor R112 does, the determination unit 110 determines the side v1-v2, which is the side of the display screen 600 closest to the 3D glass sensor L111 that detected the glasses 20, as the side of the display screen 600 closest to the observation position of the observer A.
  • Similarly, the determination unit 110 determines the side v3-v4, which is the side of the display screen 600 closest to the 3D glass sensor R112 that detected the glasses 30, as the side of the display screen 600 closest to the observation position of the observer B.
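  • As a rough illustration of this determination (a minimal sketch: the sensor names, side labels, and numeric signal strengths are illustrative assumptions), the side nearest the sensor with the strongest reading could be chosen as follows:

```python
# Each 3D glass sensor reports a signal strength for a pair of glasses; the side of the
# display screen nearest the strongest reading is taken as the base side for that observer.
SENSOR_TO_SIDE = {"3d_glass_sensor_L": "v1-v2", "3d_glass_sensor_R": "v3-v4"}

def determine_base_side(readings):
    """readings: dict mapping sensor name -> detected signal strength (0 if not detected)."""
    strongest = max(readings, key=readings.get)
    return SENSOR_TO_SIDE[strongest]

# The glasses 20 are detected more strongly by the left sensor, so side v1-v2 is chosen.
side_for_observer_a = determine_base_side({"3d_glass_sensor_L": 0.9, "3d_glass_sensor_R": 0.2})
assert side_for_observer_a == "v1-v2"
```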
  • FIG. 8 is a flowchart for explaining a process in which the video rotation unit 130 according to Embodiment 1 of the present invention generates a rotated video.
  • FIG. 9 to FIG. 13B are diagrams for explaining processing for generating a rotated image by the image rotation unit 130 according to Embodiment 1 of the present invention.
  • the video input / output unit 131 of the video rotation unit 130 inputs a left-eye video and a right-eye video as input video (S302).
  • FIGS. 9 to 11C are diagrams for explaining an input video input by the video input / output unit 131 according to Embodiment 1 of the present invention.
  • The object 40 is imaged by a camera AL51 that captures the left-eye video for the observer A, a camera AR52 that captures the right-eye video for the observer A, a camera BL53 that captures the left-eye video for the observer B, and a camera BR54 that captures the right-eye video for the observer B.
  • The video shown in FIG. 10A is captured as the left-eye video for the observer A, and the video shown in FIG. 10B is captured as the right-eye video for the observer A. With these videos, the observer A can observe a stereoscopic video as shown in FIG. 10C.
  • Likewise, the video shown in FIG. 11A is captured as the left-eye video for the observer B, and the video shown in FIG. 11B is captured as the right-eye video for the observer B. With these videos, the observer B can observe a stereoscopic video as shown in FIG. 11C.
  • the rotation control unit 132 of the video rotation unit 130 acquires the side determined by the determination unit 110 from the video control unit 120 as a processing target side, and determines whether there is an unprocessed side (S304). ). When the rotation control unit 132 determines that there is an unprocessed side (Yes in S304), the rotation control unit 132 selects one of the unprocessed sides (S306).
  • the rotation control unit 132 determines the rotation angle of the selected unprocessed side (S308).
  • When the rotation control unit 132 determines that the rotation angle of the selected unprocessed side is zero ("rotation angle zero" in S308), it determines again whether there is an unprocessed side (S304).
  • When the rotation control unit 132 determines that the rotation angle of the selected unprocessed side is 180 degrees ("rotation angle 180 degrees" in S308), it rotates the input video by 180 degrees (S310) and then determines again whether there is an unprocessed side (S304).
  • FIGS. 12A to 13B are diagrams for explaining processing in which the rotation control unit 132 according to the first embodiment of the present invention rotates the input video.
  • If the input video for the observer B were displayed as it is, it would be displayed on the display screen 600 with the side v1-v2 as the base. However, the observer B observes from the side opposite to the observer A.
  • Therefore, as shown in FIG. 12A, the left-eye video for the observer B needs to be rotated by 180 degrees and displayed on the display screen 600 so that the side v3-v4 becomes the base. Likewise, as shown in FIG. 12B, the right-eye video for the observer B needs to be rotated by 180 degrees and displayed on the display screen 600 so that the side v3-v4 becomes the base.
  • Specifically, the rotation control unit 132 can rotate the video as shown in FIG. 13B by changing the pixel value of each pixel in the video to the value obtained when the video is rotated by 180 degrees. That is, the rotation control unit 132 moves the pixel value of the pixel at the coordinates (x, y) to the position of the coordinates (h - x, v - y), where x and y are the horizontal and vertical positions of the pixel, and h and v are the numbers of pixels in the horizontal and vertical directions, respectively.
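  • A minimal sketch of this coordinate mapping (0-based indexing, so the destination becomes (h - 1 - x, v - 1 - y); the explicit loop mirrors the description above and is not meant as an efficient implementation):

```python
import numpy as np

def rotate_180(frame: np.ndarray) -> np.ndarray:
    """Rotate a frame (v rows x h columns [x channels]) by 180 degrees by moving the
    pixel at (x, y) to (h - 1 - x, v - 1 - y), using 0-based indexing."""
    v, h = frame.shape[:2]
    rotated = np.empty_like(frame)
    for y in range(v):
        for x in range(h):
            rotated[v - 1 - y, h - 1 - x] = frame[y, x]
    return rotated

# In practice the same result is obtained with frame[::-1, ::-1] or np.rot90(frame, 2).
frame = np.arange(12).reshape(3, 4)
assert np.array_equal(rotate_180(frame), frame[::-1, ::-1])
```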
  • The rotation control unit 132 repeats the above processing until it determines that there is no unprocessed side, and when it determines that there is no unprocessed side (No in S304), the processing ends.
  • FIG. 14 is a flowchart for explaining processing in which the video output unit 140 according to Embodiment 1 of the present invention outputs a rotated video.
  • FIG. 15 is a diagram for explaining processing in which the video output unit 140 according to Embodiment 1 of the present invention outputs a rotated video.
  • When the video control unit 120 causes the video output unit 140 to output a left-eye video and display it on the display screen 600 (Yes in S402), it opens the left-eye shutter of the shutter glasses of the observer for whom that video is intended, closes the right-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses of the other observer (S404).
  • That is, when the left-eye video for the observer A is displayed, the video control unit 120 opens the left-eye shutter of the glasses 20 of the observer A and closes the right-eye shutter of the glasses 20 as well as the left-eye and right-eye shutters of the glasses 30 of the observer B.
  • Likewise, when the left-eye video for the observer B is displayed, the video control unit 120 opens the left-eye shutter of the glasses 30 of the observer B and closes the right-eye shutter of the glasses 30 as well as the left-eye and right-eye shutters of the glasses 20 of the observer A.
  • Similarly, when the right-eye video for the observer A is displayed, the video control unit 120 opens the right-eye shutter of the glasses 20 of the observer A and closes the left-eye shutter of the glasses 20 as well as the left-eye and right-eye shutters of the glasses 30 of the observer B.
  • When the right-eye video for the observer B is displayed, the video control unit 120 opens the right-eye shutter of the glasses 30 of the observer B and closes the left-eye shutter of the glasses 30 as well as the left-eye and right-eye shutters of the glasses 20 of the observer A.
  • the video output unit 140 alternately outputs the left-eye video and the right-eye video for the viewer A and the viewer B at regular time intervals in accordance with the instruction of the video control unit 120.
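  • A schematic sketch of this alternating output (the ordering of the four frames within one cycle is an illustrative assumption; the description only specifies that the left-eye and right-eye videos for the observers A and B are output alternately at a fixed interval):

```python
import itertools

# One display cycle at a fixed frame interval: each observer's left-eye and right-eye
# rotated videos each occupy one quarter of the cycle.
FRAME_SCHEDULE = [("A", "left"), ("B", "left"), ("A", "right"), ("B", "right")]

def schedule(num_frames):
    """Yield which observer/eye frame is shown at each successive display interval."""
    return itertools.islice(itertools.cycle(FRAME_SCHEDULE), num_frames)

print(list(schedule(6)))
# [('A', 'left'), ('B', 'left'), ('A', 'right'), ('B', 'right'), ('A', 'left'), ('B', 'left')]
```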
  • In this way, the video processing apparatus 10 determines the side of the display screen 600 closest to the observation position of the observer, generates a rotated video obtained by rotating the input video so that the determined side becomes the base, and outputs the generated rotated video so that it is displayed on the display screen 600.
  • an image is displayed on the display screen 600 such that the side of the display screen 600 closest to the observer is the bottom side. For this reason, it is possible to observe in the original direction to be observed regardless of the observation direction.
  • the video processing apparatus 10 includes a sensor that detects a signal emitted by a device used by the observer, and when the sensor detects the signal, it generates a rotated video by rotating the input video so that the side of the display screen 600 closest to that sensor becomes the bottom side.
  • since the device is used by the observer, the observer is presumed to be observing near the sensor that detected the device; by displaying the video on the display screen 600 with the side closest to that sensor as its bottom side, the observer can observe the video in its original intended orientation.
  • the video processing apparatus 10 receives, as the input video, a left-eye video and a right-eye video for viewing stereoscopic video, and generates and outputs a rotated left-eye video and a rotated right-eye video obtained by rotating them; as a result, the stereoscopic video can be observed in its original intended orientation regardless of the viewing direction.
  • the video output unit 140 outputs the rotated left-eye video and the rotated right-eye video alternately at a fixed time interval, so that they are displayed alternately on the display screen 600 at that interval; by viewing the left-eye and right-eye videos alternately through stereoscopic viewing glasses, the observer can view the stereoscopic video in its original intended orientation.
  • the determination unit 110 determines the side of the display screen 600 closest to the position of the stereoscopic viewing glasses worn by the observer, and a rotated video is generated by rotating the input video so that this side becomes the bottom side.
  • since the glasses are worn by the observer, the position of the glasses corresponds to the observation position of the observer; by displaying the video on the display screen 600 with the side closest to the glasses as its bottom side, the observer can observe the video in its original intended orientation.
  • when the glasses are shutter glasses, the video processing apparatus 10, while the left-eye video for the observer nearest the determined side is displayed, opens the left-eye shutter of that observer's shutter glasses, closes its right-eye shutter, and closes the left-eye and right-eye shutters of the shutter glasses of every other observer.
  • likewise, while the right-eye video for the observer nearest the determined side is displayed, the video processing apparatus 10 opens the right-eye shutter of that observer's shutter glasses, closes its left-eye shutter, and closes the left-eye and right-eye shutters of the shutter glasses of every other observer.
  • the video processing apparatus 10 includes the rectangular display screen 600; the stereoscopic video can therefore be observed in its original intended orientation regardless of the direction from which the rectangular display screen is observed.
  • the video processing apparatus 10 also includes sensors that detect the glasses near two of the four sides of the display screen 600, and when a sensor detects the glasses, a rotated video is generated by rotating the input video so that the side of the display screen 600 closest to the sensor that detected the glasses becomes the bottom side.
  • since the glasses are worn by the observer, if the glasses are detected near one of the two sides, the position of the glasses corresponds to the observation position of the observer; by displaying the video on the display screen 600 with, of the two sides, the side closest to the glasses as its bottom side, the observer can observe the video in its original intended orientation.
  • the video processing apparatus 10 rotates by 180 degrees the input video for the observer at one of the observation positions corresponding to two opposite sides of the four sides of the display screen 600 to generate a rotated video; that is, if two observers face each other across the display screen 600, the videos they observe are upside down relative to each other, so by rotating the input video for one of the observers by 180 degrees, both observers can observe the video in its original intended orientation.
  • the video processing apparatus 10 acquires a video signal, decodes the acquired video signal, outputs a video, and displays the video on the display screen 600.
  • the video processing apparatus 10 can be realized as a system such as a mobile information terminal (tablet terminal) that acquires a video signal, processes it, and displays it.
  • when the video for the observer at an observation position is displayed, the video control unit 120 opens the left-eye and right-eye shutters of the shutter glasses worn by that observer, and closes the left-eye and right-eye shutters of the shutter glasses worn by observers at other observation positions.
  • for example, when the video for the observer A is displayed, the video control unit 120 opens the left-eye and right-eye shutters of the glasses 20 of the observer A and closes the left-eye and right-eye shutters of the glasses 30 of the observer B.
  • when the video for the observer B is displayed, the video control unit 120 opens the left-eye and right-eye shutters of the glasses 30 of the observer B and closes the left-eye and right-eye shutters of the glasses 20 of the observer A.
  • the video rotation unit 130 rotates the input video for the viewer B, and the video output unit 140 alternately outputs the video for the viewer A and the video for the viewer B at regular time intervals.
  • in this way, when the video for the observer at the observation position corresponding to the side of the display screen 600 closest to the position of the glasses worn by that observer is displayed, the left-eye and right-eye shutters of that observer's shutter glasses are opened and the left-eye and right-eye shutters of the other observers' shutter glasses are closed; as a result, a two-dimensional video can be viewed through the shutter glasses in its original intended orientation.
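For the two-dimensional variant above, a minimal sketch of the schedule could look as follows: each viewer's video occupies a display slot during which both shutters of that viewer's glasses are open and all other shutters are closed. The names and the dictionary-based shutter state are illustrative assumptions.

```python
def schedule_2d(viewers):
    """Yield (viewer, shutter_state) pairs, one per display slot; True means
    both eye shutters of that viewer's glasses are open."""
    while True:
        for current in viewers:
            yield current, {v: (v == current) for v in viewers}

# Example: alternating between viewer "A" (rotated video) and viewer "B".
gen = schedule_2d(["A", "B"])
assert next(gen) == ("A", {"A": True, "B": False})
assert next(gen) == ("B", {"A": False, "B": True})
```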
  • FIG. 16 is an external view showing the external appearance of the video processing apparatus 10a according to Embodiment 2 of the present invention.
  • the video processing apparatus 10a further includes a 3D glass sensor B113, a 3D glass sensor T114, a transmitter B153, and a transmitter T154; that is, the video processing apparatus 10a includes sensors for detecting the glasses at positions corresponding to each of the four sides forming the outer peripheral edge of the display screen 600.
  • since the 3D glass sensor B113 and the 3D glass sensor T114 have the same function as the 3D glass sensor L111 or the 3D glass sensor R112, and the transmitter B153 and the transmitter T154 have the same function as the transmitter L151 or the transmitter R152, detailed description thereof is omitted.
  • the eyeglasses 60 are shutter eyeglasses worn by the observer C, and include a transmitter 61 and a receiver 62.
  • the glasses 70 are shutter glasses worn by the observer D, and include a transmitter 71 and a receiver 72.
  • the spectacles 60 and the spectacles 70 have the same functions as those of the spectacles 20 or the spectacles 30, and thus detailed description thereof is omitted.
  • FIG. 17 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit 100a according to Embodiment 2 of the present invention.
  • the stereoscopic video reproduction unit 100a includes the 3D glass sensor L111, the 3D glass sensor R112, the 3D glass sensor B113, the 3D glass sensor T114, the transmitter L151, the transmitter R152, the transmitter B153, the transmitter T154, a determination unit 110, a video control unit 120, a video rotation unit 130a, a video output unit 140, and a synchronization signal generation unit 150.
  • the video rotation unit 130a having a function different from that of the first embodiment will be described, and the other components are the same as those of the first embodiment, and thus detailed description thereof will be omitted.
  • FIG. 18 is a block diagram showing a functional configuration of the video rotation unit 130a according to Embodiment 2 of the present invention.
  • the video rotation unit 130a includes a video processing unit 134 in addition to the video input / output unit 131, the rotation control unit 132, and the memory 133 in the first embodiment.
  • the video input / output unit 131 inputs a left-eye video and a right-eye video for viewing a stereoscopic video as input video. Then, the video input / output unit 131 stores the input left-eye video and right-eye video in the memory 133.
  • when the side determined by the determination unit 110 is the first of the four sides, the rotation control unit 132 rotates the input video for the observer at the observation position corresponding to the first side by 180 degrees; when the determined side is the second side, it rotates the input video for the observer at the observation position corresponding to the second side by 90 degrees to the left; and when the determined side is the third side, it rotates the input video for the observer at the observation position corresponding to the third side by 90 degrees to the right.
  • the rotation control unit 132 reads the left-eye video and the right-eye video from the memory 133 and stores the video obtained by rotating the left-eye video and the right-eye video in the memory 133.
  • the image processing unit 134 adjusts the size of the input image rotated 90 degrees by the rotation control unit 132 to the size of the display screen 600 and generates a rotated image. That is, the video processing unit 134 reads the video rotated 90 degrees from the memory 133, adjusts the size, and stores the video in the memory 133.
  • specifically, the video processing unit 134 adjusts the size of the input video rotated by 90 degrees to the size of the display screen 600 by letterboxing, that is, by reducing the entire video so that the whole video fits on the display screen 600.
  • the video input / output unit 131 reads the letterboxed video from the memory 133 and outputs it to the video output unit 140.
  • note that the video rotation unit 130a may omit rotation and resizing as separate steps by the rotation control unit 132 and the video processing unit 134; instead, when the video input/output unit 131 reads the left-eye video and the right-eye video from the memory 133, it may read out video that has already been rotated and size-adjusted and output it to the video output unit 140.
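The letterboxing step can be sketched as below, assuming the scaling is delegated to the Pillow imaging library; the function name and the black bar colour are illustrative choices, not details taken from the patent.

```python
from PIL import Image

def letterbox(frame: Image.Image, screen_w: int, screen_h: int) -> Image.Image:
    """Scale `frame` down uniformly so it fits inside the screen, then centre
    it on a black canvas of the screen size."""
    scale = min(screen_w / frame.width, screen_h / frame.height)
    new_w, new_h = int(frame.width * scale), int(frame.height * scale)
    resized = frame.resize((new_w, new_h))
    canvas = Image.new("RGB", (screen_w, screen_h), "black")
    canvas.paste(resized, ((screen_w - new_w) // 2, (screen_h - new_h) // 2))
    return canvas

# Example: a frame rotated 90 degrees (1080x1920) letterboxed back onto a
# 1920x1080 screen keeps its aspect ratio and gains bars on both sides.
rotated = Image.new("RGB", (1080, 1920))
assert letterbox(rotated, 1920, 1080).size == (1920, 1080)
```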
  • FIG. 19 is a flowchart for explaining a process in which the video rotation unit 130a according to Embodiment 2 of the present invention generates a rotated video.
  • FIG. 20 is a diagram for explaining processing in which the video rotation unit 130a according to Embodiment 2 of the present invention generates a rotated video.
  • the video input / output unit 131 of the video rotation unit 130a inputs an input video (S302), and the rotation control unit 132 determines whether there is an unprocessed side (S304). If it is determined that there is an unprocessed side (Yes in S304), one of the unprocessed sides is selected (S306). Since these processes (S302 to S306) are the same as the processes (S302 to S306) in the first embodiment, detailed description thereof is omitted.
  • the rotation control unit 132 determines the rotation angle of the selected unprocessed side (S312).
  • when the rotation control unit 132 determines that the rotation angle for the selected unprocessed side is zero ("rotation angle zero" in S312), it again determines whether there is an unprocessed side (S304).
  • when the rotation control unit 132 determines that the rotation angle for the selected unprocessed side is "90 degrees to the left" ("rotation angle left 90 degrees" in S312), it rotates the input video 90 degrees to the left (S314).
  • the video processing unit 134 then adjusts the size of the input video rotated 90 degrees to the left to the size of the display screen 600 by letterboxing (S316), and the rotation control unit 132 again determines whether there is an unprocessed side (S304).
  • for example, the video shown in (a) of FIG. 20 is rotated 90 degrees counterclockwise to generate the video shown in (b) of FIG. 20, and the video shown in (b) of FIG. 20 is letterboxed to generate the video shown in (c) of FIG. 20.
  • when the rotation control unit 132 determines that the rotation angle for the selected unprocessed side is "90 degrees to the right" ("rotation angle right 90 degrees" in S312), it rotates the input video 90 degrees to the right (S318).
  • the video processing unit 134 then adjusts the size of the input video rotated 90 degrees to the right to the size of the display screen 600 by letterboxing (S320), and the rotation control unit 132 again determines whether there is an unprocessed side (S304).
  • when the rotation control unit 132 determines that the rotation angle for the selected unprocessed side is "180 degrees" ("rotation angle 180 degrees" in S312), it rotates the input video by 180 degrees (S322) and again determines whether there is an unprocessed side (S304).
  • the rotation control unit 132 repeats the above processing until it determines that there is no unprocessed side; when it determines that there is no unprocessed side (No in S304), the process ends.
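The per-side dispatch of S312 to S322 can be summarized in a short sketch. The assignment of screen sides to angles, and the helper names rotate and letterbox passed in as callbacks, are assumptions made for illustration only; in use they would stand in for the rotation by the rotation control unit 132 and the resizing by the video processing unit 134.

```python
ANGLE_BY_SIDE = {
    "bottom": 0,      # reference viewer: no rotation
    "top": 180,       # facing viewer: rotate 180 degrees
    "left": -90,      # rotate 90 degrees to the left
    "right": 90,      # rotate 90 degrees to the right
}

def generate_rotated_videos(input_video, detected_sides, rotate, letterbox):
    """Return {side: video} for every side near which glasses were detected."""
    out = {}
    for side in detected_sides:
        angle = ANGLE_BY_SIDE[side]
        video = rotate(input_video, angle) if angle else input_video
        if abs(angle) == 90:          # aspect ratio changed, so letterbox
            video = letterbox(video)
        out[side] = video
    return out
```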
  • FIG. 21 is a diagram for explaining processing in which the video output unit 140 according to Embodiment 2 of the present invention outputs video.
  • as shown in FIG. 21, the video output unit 140 sequentially outputs the left-eye video 1 for each of the viewers A, B, C, and D, and then sequentially outputs the right-eye video 1 for each of the viewers A, B, C, and D.
  • the video output unit 140 then sequentially outputs the left-eye video 2 for each of the viewers A, B, C, and D, followed by the right-eye video 2 for each of the viewers A, B, C, and D.
  • note that the video output unit 140 may instead output the left-eye video 3 or a subsequent left-eye video after the right-eye video 1, in place of the left-eye video 2.
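The output order of FIG. 21 can be expressed as a small generator; the frame-dictionary layout and the function name are illustrative assumptions.

```python
def output_order(stereo_frames, viewers=("A", "B", "C", "D")):
    """stereo_frames: iterable of {viewer: {'left': ..., 'right': ...}} dicts,
    one per stereo frame pair. Yields (viewer, eye, frame) in display order."""
    for frame in stereo_frames:
        for eye in ("left", "right"):
            for viewer in viewers:
                yield viewer, eye, frame[viewer][eye]

# Example with symbolic frames: L1(A), L1(B), L1(C), L1(D), R1(A), ...
frames = [{v: {"left": f"L1({v})", "right": f"R1({v})"} for v in "ABCD"}]
assert [f for _, _, f in output_order(frames)][:5] == \
       ["L1(A)", "L1(B)", "L1(C)", "L1(D)", "R1(A)"]
```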
  • as described above, in Embodiment 2, the rectangular display screen 600 and sensors for detecting the glasses near each of its four sides are provided; when the side of the display screen 600 closest to the sensor that detected the glasses is the first of the four sides, the input video for the observer at the observation position near the first side is rotated 180 degrees, when it is the second side, the input video for the observer at the observation position near the second side is rotated 90 degrees to the left, and when it is the third side, the input video for the observer at the observation position near the third side is rotated 90 degrees to the right, to generate the rotated videos.
  • that is, when observers observe from the four sides of the display screen 600, the videos for the observers other than the reference observer must be rotated by 180 degrees, 90 degrees to the left, and 90 degrees to the right, respectively; by rotating the observers' input videos in this way, all of the observers can observe the video in its original intended orientation.
  • the video processing apparatus 10a adjusts the size of the input video rotated 90 degrees to the size of the display screen 600, and generates a rotated video. That is, when the display screen 600 is rectangular, when the video is rotated 90 degrees, the rotated video has a size different from the size of the display screen 600. For this reason, by adjusting the image after rotation to the size of the display screen 600, the observer can observe the image.
  • the video processing apparatus 10a performs letterboxing that reduces the size of the input video rotated 90 degrees so that the entire video is displayed on the display screen 600.
  • the viewer can observe the entire video by adjusting the size of the video that has been rotated to a size different from the size of the display screen 600 by letterboxing.
  • note that the video processing apparatus may include sensors for detecting the glasses at positions corresponding to any three of the four sides that form the outer peripheral edge of the display screen 600.
  • in that case, when the side determined by the determination unit is the first of the three sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the first side by 180 degrees, and when the determined side is the second of the three sides, it rotates the input video for the observer at the observation position corresponding to the second side by 90 degrees, to generate the rotated videos.
  • that is, when three observers observe from three sides of the display screen 600, by rotating the input videos for the observers by 180 degrees and 90 degrees, all three observers can observe the video in its original intended orientation.
  • in Embodiments 1 and 2 described above, the video processing apparatus is an apparatus with which a stereoscopic video is observed using shutter glasses.
  • in Embodiment 3, the video processing apparatus is a glasses-free apparatus using a lenticular system.
  • FIG. 22 is an external view showing the external appearance of the video processing apparatus 10b according to Embodiment 3 of the present invention.
  • the video processing apparatus 10b includes a display screen 600, an optical lens 11, a controller sensor L111a, and a controller sensor R112a.
  • the video processing apparatus 10b includes a controller sensor L111a and a controller sensor R112a instead of the 3D glass sensor L111 and the 3D glass sensor R112 of the video processing apparatus 10 in Embodiment 1; since the other configuration is the same as that of Embodiment 1, detailed description thereof is omitted.
  • the controller 80 is a control device used by the observer A to control the video processing device 10b, and includes a transmitter 81 and an operation button 82.
  • the controller 90 is a control device used by the observer B to control the video processing device 10b, and includes a transmitter 91 and operation buttons 92.
  • the controller 80 and the controller 90 may be a remote controller for selecting a video displayed on the display screen 600 or a controller for a game. Further, the controller 80 and the controller 90 may be connected to the video processing device 10b by wire.
  • the display screen 600 is a rectangular screen that displays an input video. That is, the display screen 600 has four sides on the outer periphery.
  • the controller sensor L111a and the controller sensor R112a are sensors that detect signals emitted by the controller 80 or the controller 90 used by the observers; as shown in FIG. 22, when the controller 80 is near the controller sensor L111a, the controller sensor L111a detects the signal emitted by the controller 80, and when the controller 90 is near the controller sensor R112a, the controller sensor R112a detects the signal emitted by the controller 90.
  • the optical lens 11 is an optical lens disposed on the observation-side surface of the display screen 600.
  • FIG. 23 is an external view showing the external appearance of the optical lens 11 according to Embodiment 3 of the present invention.
  • the optical lens 11 is a sheet-like lenticular lens (hereinafter referred to as a lenticular sheet) that refracts the video so that one video is observed from each of the directions of two opposite sides among the four sides forming the outer peripheral edge of the display screen 600 ("observer A direction" and "observer B direction" in the figure).
  • that is, the optical lens 11 refracts a composite video, displayed divided into a plurality of areas of the display screen 600, so that one of the plurality of videos before division is observed from each of the two sides of the display screen 600.
  • the composite video is a video obtained by dividing a plurality of videos, including the rotated video generated by the video rotation unit 130, into a plurality of areas and combining them; it is output by the video output unit 140 and displayed on the display screen 600.
  • FIG. 24 is a diagram illustrating a process in which the video processing apparatus 10b according to the third embodiment of the present invention displays a video on the display screen 600.
  • the determination unit 110 determines the side of the display screen 600 closest to the sensor that detected the signal as the side of the display screen 600 closest to the observation position; for example, when the controller sensor L111a detects a signal emitted by the controller 80, the determination unit 110 determines the side of the display screen 600 that is closest to the controller sensor L111a.
  • the image rotation unit 130 generates a rotation image obtained by rotating the input image so that the side determined by the determination unit 110 becomes the bottom side as an image for the observer using the controller.
  • for example, when the controller sensor L111a detects a signal emitted by the controller 80, the video rotation unit 130 rotates the input video for the observer A shown in (a1) of FIG. 24 to generate the rotated video shown in (b1) of the figure; the video for the observer B shown in (a2) of the figure is not rotated and is input to the video output unit 140 as-is as the video shown in (b2) of the figure.
  • the video output unit 140 combines the video shown in (b1) of the figure and the video shown in (b2) of the figure to generate the composite video shown in (c) of the figure.
  • by means of the optical lens 11, the observer A can then observe the video shown in (d1) of the figure, and the observer B can observe the video shown in (d2) of the figure.
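The compositing step of (c) in FIG. 24 can be sketched as a column interleave: strips of the rotated video for observer A alternate with strips of the unrotated video for observer B, so that the lenticular sheet directs alternate strips toward each observer. A one-pixel strip width and the helper name are assumptions made for illustration.

```python
def interleave_columns(video_a, video_b):
    """video_a, video_b: equally sized frames as lists of rows (lists of
    pixels). Even columns are taken from A, odd columns from B."""
    composite = []
    for row_a, row_b in zip(video_a, video_b):
        composite.append([row_a[x] if x % 2 == 0 else row_b[x]
                          for x in range(len(row_a))])
    return composite

# Example: even columns come from observer A's video, odd from observer B's.
a = [["A0", "A1", "A2", "A3"]]
b = [["B0", "B1", "B2", "B3"]]
assert interleave_columns(a, b) == [["A0", "B1", "A2", "B3"]]
```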
  • as described above, the video processing apparatus 10b includes the display screen 600 and the optical lens 11, and the optical lens 11 refracts the composite video, displayed divided into a plurality of areas of the display screen 600, so that one of the plurality of videos before division is observed from each side of the display screen 600; that is, by using a lenticular lens as the optical lens 11, one video can be observed from each side of the display screen 600, so the video can be observed in its original intended orientation regardless of the viewing direction.
  • the display screen 600 is rectangular, and the optical lens 11 refracts an image so that one image of a plurality of images can be observed from each of two opposite sides of the four sides of the display screen 600. Let For this reason, when an observer observes from the direction of the two sides, the image can be observed in the original direction to be observed.
  • the video processing apparatus 10b includes sensors that detect signals emitted by the controllers used by the observers, and generates a rotated video by rotating the input video so that the side of the display screen 600 closest to the sensor that detected the controller becomes the bottom side.
  • since the controller is a device used by the observer, the observer is presumed to be observing near the sensor that detected the controller; by displaying the video on the display screen 600 with the side closest to that sensor as its bottom side, the observer can observe the video in its original intended orientation.
  • FIG. 25 is an external view showing the external appearance of a video processing apparatus 10c according to Modification 1 of Embodiment 3 of the present invention.
  • the video processing apparatus 10c includes an optical lens 12 instead of the optical lens 11 of the video processing apparatus 10b in the third embodiment. Since the other configuration of the video processing device 10c is the same as the configuration of the video processing device 10b in the third embodiment, detailed description thereof is omitted.
  • the optical lens 12 is an optical lens in which two lenticular sheets are stacked, one oriented vertically and the other horizontally; that is, the optical lens 12 refracts the video so that the left-eye video and the right-eye video of the observer A and the left-eye video and the right-eye video of the observer B can each be observed.
  • FIG. 26 is a diagram illustrating a process in which the video processing apparatus 10c according to the first modification of the third embodiment of the present invention displays a video on the display screen 600.
  • the video rotation unit 130 rotates the input video shown in (a1) of FIG. 26 to generate the rotated video shown in (b1) of the figure; similarly, the video rotation unit 130 rotates the input video shown in (a2) of the figure by 180 degrees to generate the rotated video shown in (b2) of the figure as the left-eye video for the observer A.
  • the video output unit 140 synthesizes the videos shown in (b1) to (b4) of the figure and generates a synthesized video shown in (c) of the figure.
  • the observer A and the observer B can observe the stereoscopic image by the optical lens 12.
  • a game such as table tennis can be displayed on the display screen 600, and the viewer A and the viewer B can observe the game in a stereoscopic image.
  • FIG. 27 is an external view showing the external appearance of a video processing apparatus 10d according to Modification 2 of Embodiment 3 of the present invention.
  • the video processing apparatus 10d includes a controller sensor B113a and a controller sensor T114a in addition to the controller sensor L111a and the controller sensor R112a in the third embodiment. That is, the video processing device 10d includes sensors that detect the controller at positions corresponding to the four sides forming the outer peripheral edge of the display screen 600, respectively.
  • since the controller sensor B113a and the controller sensor T114a have the same functions as the controller sensor L111a or the controller sensor R112a, detailed description thereof is omitted.
  • the controller 93 is a controller used by the observer C, and the controller 94 is a controller used by the observer D; since the controller 93 and the controller 94 have the same functions as the controller 80 or the controller 90, detailed description thereof is omitted.
  • the video processing device 10d includes the same optical lens as the optical lens 12 of the video processing device 10c in the first modification of the third embodiment.
  • the optical lens 12 is a lenticular sheet that refracts an image so that one image of the plurality of images is observed from each of the four sides forming the outer peripheral edge of the display screen 600.
  • that is, the display screen 600 is rectangular, and the optical lens refracts the video so that one of the plurality of videos is observed from each of the four sides of the display screen 600; even when an observer observes from the direction of any of the four sides, the video can therefore be observed in its original intended orientation.
  • in Embodiments 1 to 3 described above, the video processing apparatus determines the side of the display screen closest to the observer's observation position using a sensor such as a 3D glass sensor or a controller sensor.
  • in Embodiment 4, the video processing apparatus determines the side of the display screen closest to the observation position by imaging the observer with a built-in camera.
  • since the video processing apparatus 10e of Embodiment 4 has the same configuration as the video processing apparatus 10 of Embodiment 1 except for the stereoscopic video reproduction unit 100, the stereoscopic video reproduction unit 100b included in the video processing apparatus 10e of Embodiment 4 is described in detail below.
  • FIG. 28 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit 100b according to Embodiment 4 of the present invention.
  • the stereoscopic video reproduction unit 100b includes a determination unit 110, a video control unit 120, a video rotation unit 130, a video output unit 140, a camera 160, and a person recognition unit 170.
  • since the determination unit 110, the video control unit 120, the video rotation unit 130, and the video output unit 140 have the same functions as the determination unit 110, the video control unit 120, the video rotation unit 130, and the video output unit 140 included in the stereoscopic video reproduction unit 100 in Embodiment 1, detailed description thereof is omitted or simplified.
  • the camera 160 is a built-in camera that captures an image.
  • the person recognition unit 170 recognizes a person shown in the image captured by the camera 160 and detects it as an observer.
  • the determination unit 110 determines the side of the display screen 600 closest to the observer detected by the person recognition unit 170 as the side of the display screen 600 closest to the observer's observation position.
  • the image rotation unit 130 generates a rotation image obtained by rotating the input image so that the side determined by the determination unit 110 becomes the bottom as the image for the observer detected by the person recognition unit 170.
  • FIG. 29 is a flowchart for explaining processing in which the stereoscopic video reproduction unit 100b according to Embodiment 4 of the present invention discriminates the sides of the display screen 600.
  • FIGS. 30 and 31 are diagrams for explaining processing in which the stereoscopic video reproduction unit 100b according to Embodiment 4 of the present invention determines the sides of the display screen 600.
  • the camera 160 captures an image (S502).
  • a camera 160 is disposed in the vicinity of the side v2-v3 of the video processing apparatus 10e, and the camera 160 captures an image including the observer A and the observer B.
  • the observer A is observing from the side v1-v2, and the observer B is observing from the side v3-v4; an image captured by the camera 160 is shown in FIG. 31.
  • the person recognition unit 170 recognizes the persons in the image captured by the camera 160 and detects them as observers (S504); specifically, as shown in FIG. 31, the person recognition unit 170 recognizes that two persons (observers) appear in the image captured by the camera 160.
  • next, the person recognition unit 170 identifies the observer in the region adjacent to the side v3-v4 (S506); specifically, as shown in FIG. 31, the person recognition unit 170 recognizes the observer appearing in the right region of the image captured by the camera 160, which is the region adjacent to the side v3-v4, and in the figure this is the observer B.
  • the person recognition unit 170 then identifies the observer in the region adjacent to the side v1-v2 (S508); specifically, as shown in FIG. 31, the person recognition unit 170 recognizes the observer appearing in the left region of the image captured by the camera 160, which is the region adjacent to the side v1-v2, and in the figure this is the observer A.
  • the determination unit 110 determines the side of the display screen 600 closest to the observer detected by the person recognition unit 170 as the side of the display screen 600 closest to the observation position of the observer (S510). That is, the determination unit 110 determines the side v1-v2 as the side of the display screen 600 closest to the observation position of the observer A, and sets the side v3-v4 as the side of the display screen 600 closest to the observation position of the observer B. Determine.
  • then, the video rotation unit 130 generates, as the video for each observer detected by the person recognition unit 170, a rotated video obtained by rotating the input video so that the side determined by the determination unit 110 becomes the bottom side, and the video output unit 140 outputs the generated rotated video.
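The discrimination of S506 to S510 can be sketched as a simple mapping from a detected person's horizontal position in the camera image to the nearest side: the left half of the image corresponds to the adjacent side v1-v2 and the right half to v3-v4. The person-detection step itself is outside this sketch, and the function name is illustrative.

```python
def side_for_person(person_x, image_width):
    """person_x: horizontal centre (in pixels) of a person detected in the
    image captured by the camera near side v2-v3."""
    return "v1-v2" if person_x < image_width / 2 else "v3-v4"

# Example: two observers detected in a 640-pixel-wide camera image.
assert side_for_person(150, 640) == "v1-v2"   # left region  -> observer A
assert side_for_person(500, 640) == "v3-v4"   # right region -> observer B
```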
  • as described above, in Embodiment 4, a person appearing in the image captured by the camera 160 is recognized and detected as an observer, the side of the display screen 600 closest to the detected observer is determined as the side of the display screen 600 closest to that observer's observation position, and the input video is rotated accordingly; that is, by imaging the observer with the camera 160, the video is displayed on the display screen 600 with the side closest to the observer as its bottom side, so the video can be observed in its original intended orientation regardless of the viewing direction.
  • the video processing device in the modification of the fourth embodiment has the same configuration as the video processing device 10e in the fourth embodiment, description of the configuration is omitted.
  • FIG. 32 is a flowchart for explaining a process in which the stereoscopic video reproduction unit 100b according to the modification of the fourth embodiment of the present invention discriminates the sides of the display screen 600.
  • 33 and 34 are diagrams for explaining processing in which the stereoscopic video reproduction unit 100b according to the modification of the fourth embodiment of the present invention discriminates the sides of the display screen 600.
  • the camera 160 captures an image (S602).
  • as shown in FIG. 33, a camera 160 is disposed in the vicinity of the side v2-v3 of the video processing apparatus, and the camera 160 captures an image including the observer A, the observer B, the observer C, and the observer D.
  • the observer A is observing from the side v1-v2, the observer B from the side v3-v4, the observer C from the side v1-v4, and the observer D from the side v2-v3; an image captured by the camera 160 is shown in FIG. 34.
  • the person recognition unit 170 recognizes the persons in the image captured by the camera 160 and detects them as observers (S604); specifically, as shown in FIG. 34, the person recognition unit 170 recognizes that four persons (observers) appear in the image captured by the camera 160.
  • next, the person recognition unit 170 identifies the observer in the region adjacent to the side v2-v3 (S606); specifically, as shown in FIG. 34, this is the observer appearing in the lower region of the image captured by the camera 160, which is the region adjacent to the side v2-v3, namely the observer D.
  • the person recognition unit 170 then identifies the observer in the region adjacent to the side v3-v4 (S608); specifically, as shown in FIG. 34, this is the observer appearing in the right region of the image, namely the observer B.
  • the person recognition unit 170 then identifies the observer in the region adjacent to the side v1-v2 (S610); specifically, as shown in FIG. 34, this is the observer appearing in the left region of the image, namely the observer A.
  • finally, the person recognition unit 170 identifies the observer in the region adjacent to the side v1-v4 (S612); specifically, as shown in FIG. 34, this is the observer appearing in the upper region of the image, namely the observer C.
  • the determination unit 110 determines the side of the display screen 600 closest to the observer detected by the person recognition unit 170 as the side of the display screen 600 closest to the observation position of the observer (S614). That is, the determination unit 110 determines the side v1-v2 as the side of the display screen 600 closest to the observation position of the observer A, and sets the side v3-v4 as the side of the display screen 600 closest to the observation position of the observer B. The side v1-v4 is determined as the side of the display screen 600 closest to the observation position of the observer C, and the side v2-v3 is determined as the side of the display screen 600 closest to the observation position of the observer D.
  • the determination unit 110 can determine the side of the display screen 600 that is closest to the observation position of the observer.
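For the four-observer variant, the same idea extends to all four image regions; one possible sketch assigns a detected person to the side whose image border they are closest to. The region boundaries and the function name are assumptions made for illustration.

```python
def side_for_region(px, py, width, height):
    """Map a detected person's position in the camera image to the nearest
    side of the display screen 600."""
    distances = {
        "v1-v2": px,             # left edge of the image
        "v3-v4": width - px,     # right edge
        "v1-v4": py,             # upper edge
        "v2-v3": height - py,    # lower edge (the side the camera is on)
    }
    return min(distances, key=distances.get)

# Examples consistent with FIG. 34: far left maps to v1-v2, near the bottom
# maps to v2-v3.
assert side_for_region(10, 240, 640, 480) == "v1-v2"
assert side_for_region(320, 470, 640, 480) == "v2-v3"
```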
  • FIG. 35 is a diagram showing a minimum configuration of a video processing apparatus according to an embodiment of the present invention and a modification thereof. That is, as shown in the figure, the video processing apparatus 10f with the minimum configuration only needs to include the determination unit 110, the video rotation unit 130, and the video output unit 140.
  • the present invention can be realized not only as such a video processing apparatus, but also as a video processing method in which processing of a characteristic processing unit included in the video processing apparatus is used as a step.
  • each processing unit included in the video processing apparatus according to the present invention may be realized as an LSI (Large Scale Integration) integrated circuit; for example, it can be realized as an integrated circuit including the components of the video processing apparatus 10f shown in FIG. 35.
  • Each processing unit included in the integrated circuit may be individually made into one chip, or may be made into one chip so as to include a part or all of them.
  • the name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI; implementation with a dedicated circuit or a general-purpose processor is also possible, and an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor whose internal circuit-cell connections and settings can be reconfigured, may also be used.
  • the present invention can also be realized as a program that causes a computer to execute characteristic processing included in the video processing method.
  • a program can be distributed via a recording medium such as a CD-ROM and a transmission medium such as the Internet.
  • the present invention can be applied to a video processing apparatus, such as a tablet terminal, with which a video can be observed in its original intended orientation regardless of the viewing direction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Provided is a video processing device (10) for processing input video so that an observer can view the input video displayed on a display screen (600), wherein the video processing device (10) is provided with an assessment unit (110) to identify which side of the display screen (600) is closest to the viewing position of the observer, a video rotation unit (130) to generate rotated video in which the input video is rotated such that the side identified by the assessment unit (110) is the bottom side, and a video output unit (140) to output the generated rotated video.

Description

Video processing apparatus and video processing method

The present invention relates to a video processing apparatus and a video processing method, and more particularly to a video processing apparatus and a video processing method that process an input video so that an observer can observe the input video displayed on a display screen.

Conventionally, techniques have been proposed that allow a plurality of videos to be observed simultaneously on a single display screen (see, for example, Patent Documents 1 and 2). Each of these conventional techniques applies a method in which one stereoscopic video is divided into a left-eye video and a right-eye video, the left-eye video and the right-eye video are displayed alternately on one display screen, and the display is observed through glasses equipped with shutters that operate in synchronization with the display.

Specifically, the conventional techniques display a plurality of videos in turn and, while a given video is displayed, open both shutters (or the shutter for one eye) of the glasses worn by the observer who is to observe that video and close both shutters of the glasses worn by the other observers. By performing this process sequentially, a plurality of observers can observe a plurality of videos at the same time.

Patent Document 1: Japanese Patent Application Laid-Open No. 6-175631
Patent Document 2: Japanese Patent Application Laid-Open No. 10-260377

However, with the conventional techniques, in a video reproduction apparatus such as a tablet terminal that allows a plurality of observers to observe a video from different directions, there is the problem that, depending on the viewing direction, the video cannot be observed in its original intended orientation.

The present invention has been made to solve the above problem, and an object of the present invention is to provide a video processing apparatus and a video processing method capable of presenting a video in its original intended orientation regardless of the direction from which it is observed.
To achieve the above object, a video processing apparatus according to one aspect of the present invention is a video processing apparatus that processes an input video so that an observer can observe the input video displayed on a display screen, and includes: a determination unit that determines the side of the display screen closest to the observation position of the observer; a video rotation unit that generates a rotated video by rotating the input video so that the side determined by the determination unit becomes the bottom side; and a video output unit that outputs the generated rotated video.

According to this configuration, the video processing apparatus determines the side of the display screen closest to the observer's observation position, generates a rotated video by rotating the input video so that the determined side becomes the bottom side, and outputs the generated rotated video so that it is displayed on the display screen. The video is thereby displayed on the display screen with the side closest to the observer as its bottom side, and can be observed in its original intended orientation regardless of the viewing direction.

Preferably, the video processing apparatus further includes a sensor that detects a signal emitted by a device used by the observer; when the sensor detects the signal emitted by the device, the determination unit determines the side of the display screen closest to that sensor as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer using the device, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the bottom side.

According to this configuration, the video processing apparatus includes a sensor that detects a signal emitted by a device used by the observer and, when the sensor detects the signal, generates a rotated video by rotating the input video so that the side of the display screen closest to that sensor becomes the bottom side. Since the device is used by the observer, the observer is presumed to be observing near the sensor that detected the device; by displaying the video with the side closest to that sensor as the bottom side, the observer can observe the video in its original intended orientation.

Preferably, the video rotation unit receives, as the input video, a left-eye video and a right-eye video for viewing stereoscopic video, and generates, as the rotated video, a rotated left-eye video and a rotated right-eye video obtained by rotating the left-eye video and the right-eye video; the video output unit outputs the generated rotated left-eye video and rotated right-eye video as the left-eye and right-eye videos for the observer.

According to this configuration, the video processing apparatus receives a left-eye video and a right-eye video for viewing stereoscopic video and generates and outputs a rotated left-eye video and a rotated right-eye video obtained by rotating them, so that the stereoscopic video can be observed in its original intended orientation regardless of the viewing direction.

Preferably, the video output unit outputs the rotated left-eye video and the rotated right-eye video alternately at a fixed time interval, thereby causing them to be displayed alternately on the display screen at that fixed time interval.

According to this configuration, the rotated left-eye video and the rotated right-eye video are displayed alternately on the display screen at a fixed time interval; by viewing the left-eye and right-eye videos alternately through stereoscopic viewing glasses, the observer can view the stereoscopic video in its original intended orientation.
Preferably, the determination unit determines the side of the display screen closest to the position of the stereoscopic viewing glasses worn by the observer as the side of the display screen closest to the observation position.

According to this configuration, since the glasses are worn by the observer, the position of the glasses corresponds to the observation position of the observer; by displaying the video so that the side of the display screen closest to the glasses becomes the bottom side, the observer can observe the video in its original intended orientation.

Preferably, the glasses are shutter glasses that open and close a left-eye shutter and a right-eye shutter in synchronization with the times at which the left-eye video and the right-eye video are displayed on the display screen, and the video processing apparatus further includes a video control unit that controls the opening and closing of the left-eye and right-eye shutters of the shutter glasses worn by each of a plurality of observers. When the left-eye video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, the video control unit opens the left-eye shutter and closes the right-eye shutter of the shutter glasses worn by that observer, and closes the left-eye and right-eye shutters of the shutter glasses worn by the other observers. When the right-eye video for the observer at that observation position is displayed on the display screen, the video control unit opens the right-eye shutter and closes the left-eye shutter of the shutter glasses worn by that observer, and closes the left-eye and right-eye shutters of the shutter glasses worn by the other observers.

According to this configuration, by viewing the left-eye and right-eye videos alternately through the shutter glasses, the observer can view the stereoscopic video in its original intended orientation.

Preferably, the determination unit determines the side of the display screen closest to the position of the glasses worn by the observer as the side of the display screen closest to the observation position; the glasses are shutter glasses that open and close a left-eye shutter and a right-eye shutter; and the video processing apparatus further includes a video control unit that controls the opening and closing of the left-eye and right-eye shutters of the shutter glasses worn by each of a plurality of observers. When the video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, the video control unit opens the left-eye and right-eye shutters of the shutter glasses worn by that observer and closes the left-eye and right-eye shutters of the shutter glasses worn by the other observers.

According to this configuration, a two-dimensional video can be viewed through the shutter glasses in its original intended orientation.
 また、好ましくは、さらに、長方形の前記表示画面を備える。 Also preferably, the display screen having a rectangular shape is further provided.
 これによれば、映像処理装置は、長方形の表示画面を備える。これにより、当該長方形の表示画面において観察する方向にかかわらず、映像本来の観察すべき向きで、立体映像を観察することができる。 According to this, the video processing apparatus includes a rectangular display screen. As a result, regardless of the direction of observation on the rectangular display screen, a stereoscopic image can be observed in the original direction to be observed.
Preferably, the video processing device further includes the rectangular display screen and sensors that detect the glasses, one arranged at a position corresponding to each of any two of the four sides forming the outer periphery of the display screen. When one of the sensors detects the glasses, the determination unit determines the side of the display screen closest to that sensor as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer wearing the detected glasses, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the bottom side.
According to this, the video processing device includes a rectangular display screen and sensors that detect glasses near two of the four sides of the display screen, and when a sensor detects glasses, the device generates a rotated video by rotating the input video so that the side of the display screen closest to the detecting sensor becomes the bottom side. Because the glasses are worn by an observer, if glasses are placed and detected near one of these two sides, the position of the glasses corresponds to the observer's observation position. Accordingly, by displaying the video on the display screen so that, of the two sides, the one closest to the glasses becomes the bottom side, the observer can view the video in its originally intended orientation.
Preferably, the two sides are two opposing sides among the four sides forming the outer periphery of the display screen, and the video rotation unit generates the rotated video by rotating by 180 degrees the input video for the observer at one of the two observation positions corresponding to the two sides.
According to this, the video processing device generates the rotated video by rotating by 180 degrees the input video for the observer at one of the observation positions corresponding to two opposing sides of the display screen. In other words, when two observers face each other across the display screen, the video seen by one is upside down relative to the other, so rotating the input video for one of them by 180 degrees lets both observers view the video in its originally intended orientation.
Preferably, the video processing device further includes the rectangular display screen and sensors that detect the glasses, one arranged at a position corresponding to each of any three of the four sides forming the outer periphery of the display screen. When one of the sensors detects the glasses, the determination unit determines the side of the display screen closest to that sensor as the side of the display screen closest to the observation position. When the side determined by the determination unit is the first of the three sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the first side by 180 degrees; when the determined side is the second of the three sides, it rotates the input video for the observer at the observation position corresponding to the second side by 90 degrees, thereby generating the rotated video.
According to this, the video processing device includes a rectangular display screen and sensors that detect glasses near three of its four sides. When the side of the display screen closest to the detecting sensor is the first of the three sides, the input video for the observer near the first side is rotated by 180 degrees, and when it is the second side, the input video for the observer near the second side is rotated by 90 degrees, to generate the rotated video. In other words, when three observers view the display screen from three of its sides, the videos for two of them must be rotated by 180 degrees and by 90 degrees relative to the video for the remaining observer. Rotating the observers' input videos by 180 degrees and by 90 degrees in this way lets all three observers view the video in its originally intended orientation.
Preferably, the video rotation unit includes a video processing unit that generates the rotated video by adjusting the size of the input video rotated by 90 degrees to the size of the display screen.
According to this, the video processing device adjusts the size of the input video rotated by 90 degrees to the size of the display screen to generate the rotated video. When the display screen is rectangular, rotating the video by 90 degrees yields a video whose size differs from that of the display screen; adjusting the rotated video to the size of the display screen therefore allows the observer to view it.
Preferably, the video processing unit adjusts the size of the input video rotated by 90 degrees to the size of the display screen by letterboxing, that is, by shrinking the video so that the entire video fits on the display screen.
According to this, the video processing device letterboxes the input video rotated by 90 degrees, shrinking it so that the entire video is displayed on the display screen. By adjusting in this way the size of a video that no longer matches the display screen after rotation, the observer can view the whole video.
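To make the letterboxing step concrete, the following is a minimal sketch and not the patented implementation itself: it assumes the frame is held as a NumPy array, and the function name, the nearest-neighbour resize, and the black padding are illustrative choices.

```python
import numpy as np


def letterbox_rotated_frame(frame: np.ndarray, screen_h: int, screen_w: int) -> np.ndarray:
    """Shrink a 90-degree-rotated frame so the whole picture fits the screen,
    then centre it on black bars so the output matches the screen size."""
    src_h, src_w = frame.shape[:2]
    # Uniform scale factor that keeps the whole frame inside the screen.
    scale = min(screen_h / src_h, screen_w / src_w)
    new_h, new_w = int(src_h * scale), int(src_w * scale)

    # Nearest-neighbour resize by index sampling (kept dependency-free).
    ys = (np.arange(new_h) / scale).astype(int)
    xs = (np.arange(new_w) / scale).astype(int)
    resized = frame[ys][:, xs]

    # Black canvas of the screen size with the shrunken frame centred on it.
    canvas = np.zeros((screen_h, screen_w) + frame.shape[2:], dtype=frame.dtype)
    top = (screen_h - new_h) // 2
    left = (screen_w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas
```

Whether the bars end up at the top and bottom or at the left and right depends on the aspect ratios of the rotated frame and of the screen; the same routine covers both cases.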
Preferably, the video processing device further includes the rectangular display screen and sensors that detect the glasses, one arranged at a position corresponding to each of the four sides forming the outer periphery of the display screen. When one of the sensors detects the glasses, the determination unit determines the side of the display screen closest to that sensor as the side of the display screen closest to the observation position. When the side determined by the determination unit is the first of the four sides, the video rotation unit rotates the input video for the observer at the observation position corresponding to the first side by 180 degrees; when the determined side is the second side, it rotates the input video for the observer at the observation position corresponding to the second side by 90 degrees to the left; and when the determined side is the third side, it rotates the input video for the observer at the observation position corresponding to the third side by 90 degrees to the right, thereby generating the rotated video.
According to this, the video processing device includes a rectangular display screen and sensors that detect glasses near each of its four sides. When the side of the display screen closest to the detecting sensor is the first of the four sides, the input video for the observer near the first side is rotated by 180 degrees; when it is the second side, the input video for the observer near the second side is rotated by 90 degrees to the left; and when it is the third side, the input video for the observer near the third side is rotated by 90 degrees to the right, to generate the rotated video. In other words, when four observers view the display screen from its four sides, the videos for three of them must be rotated by 180 degrees, by 90 degrees to the left, and by 90 degrees to the right relative to the video for the remaining observer. Rotating the observers' input videos in this way lets all four observers view the video in its originally intended orientation, as illustrated by the sketch below.
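The mapping from the determined side to the rotation applied to each observer's video can be sketched as follows; this is an illustrative outline rather than the patent's circuitry, and the side labels, the lookup table, and the use of np.rot90 are assumptions made for the example.

```python
import numpy as np

# Hypothetical mapping from the detected screen side to the rotation applied to
# that observer's video: k is the number of 90-degree counter-clockwise turns.
ROTATION_FOR_SIDE = {
    "bottom": 0,   # reference observer, no rotation
    "top": 2,      # facing observer, 180 degrees
    "left": 1,     # 90 degrees to the left (counter-clockwise)
    "right": 3,    # 90 degrees to the right (clockwise)
}


def rotate_for_side(frame: np.ndarray, side: str) -> np.ndarray:
    """Rotate one observer's input frame so that the detected side becomes the bottom."""
    return np.rot90(frame, k=ROTATION_FOR_SIDE[side])
```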
Preferably, the video processing device further includes the display screen and an optical lens arranged on the viewing-side surface of the display screen; the video output unit outputs a composite video obtained by dividing a plurality of videos, including the rotated video generated by the video rotation unit, into a plurality of regions and combining them; and the optical lens refracts the composite video, displayed across the plurality of regions of the display screen, so that from the side of one edge of the display screen only one of the pre-division videos is observed.
According to this, the video processing device includes a display screen and an optical lens, and the optical lens refracts the composite video, displayed across the plurality of regions of the display screen, so that one of the pre-division videos is observed from the side of one edge of the display screen. That is, by using a lenticular lens as the optical lens, a single video is observed from the side of each edge of the display screen, so that the video can be viewed in its originally intended orientation regardless of the viewing direction.
Preferably, the display screen is rectangular, and the optical lens refracts the video so that one of the plurality of videos is observed from the direction of each of two opposing sides among the four sides forming the outer periphery of the display screen.
According to this, the display screen is rectangular, and the optical lens refracts the video so that one of the plurality of videos is observed from the direction of each of two opposing sides of the display screen. Therefore, an observer viewing the screen from the direction of either of these two sides can view the video in its originally intended orientation.
Preferably, the display screen is rectangular, and the optical lens refracts the video so that one of the plurality of videos is observed from the direction of each of the four sides forming the outer periphery of the display screen.
According to this, the display screen is rectangular, and the optical lens refracts the video so that one of the plurality of videos is observed from the direction of each of the four sides of the display screen. Therefore, regardless of which of the four sides an observer views the screen from, the video can be viewed in its originally intended orientation.
Preferably, the video processing device further includes a sensor that detects a signal emitted by a controller used by the observer to control the video processing device. When the sensor detects the signal emitted by the controller, the determination unit determines the side of the display screen closest to the sensor as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer using the controller, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the bottom side.
According to this, the video processing device includes a sensor that detects a signal emitted by a controller used by the observer, and it generates a rotated video by rotating the input video so that the side of the display screen closest to the sensor that detected the controller becomes the bottom side. Since the controller is a device used by the observer, the observer can be assumed to be viewing the screen near the sensor that detected the controller. Accordingly, displaying the video so that the side closest to that sensor becomes the bottom side allows the observer to view it in its originally intended orientation.
Preferably, the video processing device further includes a camera that captures an image and a person recognition unit that recognizes a person appearing in the image captured by the camera and detects the person as an observer. The determination unit determines the side of the display screen closest to the observer detected by the person recognition unit as the side of the display screen closest to the observation position, and the video rotation unit generates, as the video for the observer detected by the person recognition unit, a rotated video obtained by rotating the input video so that the side determined by the determination unit becomes the bottom side.
According to this, the video processing device recognizes a person appearing in the image captured by the camera, detects the person as an observer, determines the side of the display screen closest to the detected observer as the side of the display screen closest to that observer's observation position, and rotates the input video accordingly. That is, by imaging the observer with the camera, the video is displayed on the display screen with the side closest to the observer as the bottom side, so the video can be viewed in its originally intended orientation regardless of the viewing direction.
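As one possible sketch of the person recognition step, the example below uses OpenCV's bundled Haar-cascade face detector as a stand-in for the person recognition unit and treats the image border nearest to the detected face as a rough proxy for the display-screen side nearest to the observer; the function name and this mapping are assumptions, not the patent's method.

```python
import cv2  # OpenCV's Haar-cascade detector stands in for the person recognition unit


def nearest_side_to_face(camera_frame):
    """Detect a face in the camera image and return the image border it is closest
    to ('top', 'bottom', 'left' or 'right'), or None when no face is found."""
    gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    h, w = gray.shape
    x, y, fw, fh = faces[0]                    # first detected face
    cx, cy = x + fw / 2, y + fh / 2            # face centre
    distances = {"left": cx, "right": w - cx, "top": cy, "bottom": h - cy}
    return min(distances, key=distances.get)   # border with the smallest distance
```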
Preferably, the video processing device further includes an acquisition unit that has at least one of a network interface, a memory card interface, an optical disc drive, and a tuner and acquires a video signal; a decoding unit that decodes the acquired video signal; and a display screen that displays the video decoded by the decoding unit and output by the video output unit.
According to this, the video processing device acquires a video signal, decodes the acquired video signal, and outputs and displays the video on the display screen. The video processing device can thus also be realized as a system such as a mobile information terminal (tablet terminal) that acquires, processes, and displays a video signal.
The present invention can be realized not only as such a video processing device but also as a video processing method whose steps are the processing performed by the characteristic processing units included in the video processing device. The present invention can also be realized as an integrated circuit including the characteristic processing units included in such a video processing device.
The present invention can further be realized as a program that causes a computer to execute the characteristic processing included in the video processing method, and it goes without saying that such a program can be distributed via a recording medium such as a CD-ROM or a transmission medium such as the Internet.
According to the present invention, in a video processing device such as a tablet terminal, a video can be viewed in its originally intended orientation regardless of the viewing direction.
FIG. 1 is an external view showing the appearance of a video processing device according to Embodiment 1 of the present invention.
FIG. 2 is a block diagram showing the functional configuration of the video processing device according to Embodiment 1 of the present invention.
FIG. 3 is a block diagram showing the functional configuration of the stereoscopic video reproduction unit according to Embodiment 1 of the present invention.
FIG. 4 is a block diagram showing the functional configuration of the video rotation unit according to Embodiment 1 of the present invention.
FIG. 5 is a flowchart for explaining the video processing performed by the video processing device according to Embodiment 1 of the present invention.
FIG. 6 is a flowchart for explaining the process in which the determination unit according to Embodiment 1 of the present invention determines the side of the display screen.
FIG. 7 is a diagram for explaining the process in which the determination unit according to Embodiment 1 of the present invention determines the side of the display screen.
FIG. 8 is a flowchart for explaining the process in which the video rotation unit according to Embodiment 1 of the present invention generates a rotated video.
FIG. 9 is a diagram for explaining the input video received by the video input/output unit according to Embodiment 1 of the present invention.
FIGS. 10A, 10B, and 10C are diagrams for explaining the input video received by the video input/output unit according to Embodiment 1 of the present invention.
FIGS. 11A, 11B, and 11C are diagrams for explaining the input video received by the video input/output unit according to Embodiment 1 of the present invention.
FIGS. 12A and 12B are diagrams for explaining the process in which the rotation control unit according to Embodiment 1 of the present invention rotates the input video.
FIGS. 13A and 13B are diagrams for explaining the process in which the rotation control unit according to Embodiment 1 of the present invention rotates the input video.
FIG. 14 is a flowchart for explaining the process in which the video output unit according to Embodiment 1 of the present invention outputs the rotated video.
FIG. 15 is a diagram for explaining the process in which the video output unit according to Embodiment 1 of the present invention outputs the rotated video.
FIG. 16 is an external view showing the appearance of a video processing device according to Embodiment 2 of the present invention.
FIG. 17 is a block diagram showing the functional configuration of the stereoscopic video reproduction unit according to Embodiment 2 of the present invention.
FIG. 18 is a block diagram showing the functional configuration of the video rotation unit according to Embodiment 2 of the present invention.
FIG. 19 is a flowchart for explaining the process in which the video rotation unit according to Embodiment 2 of the present invention generates a rotated video.
FIG. 20 is a diagram for explaining the process in which the video rotation unit according to Embodiment 2 of the present invention generates a rotated video.
FIG. 21 is a diagram for explaining the process in which the video output unit according to Embodiment 2 of the present invention outputs a video.
FIG. 22 is an external view showing the appearance of a video processing device according to Embodiment 3 of the present invention.
FIG. 23 is an external view showing the appearance of the optical lens according to Embodiment 3 of the present invention.
FIG. 24 is a diagram showing the process by which the video processing device according to Embodiment 3 of the present invention displays a video on the display screen.
FIG. 25 is an external view showing the appearance of a video processing device according to Modification 1 of Embodiment 3 of the present invention.
FIG. 26 is a diagram showing the process by which the video processing device according to Modification 1 of Embodiment 3 of the present invention displays a video on the display screen.
FIG. 27 is an external view showing the appearance of a video processing device according to Modification 2 of Embodiment 3 of the present invention.
FIG. 28 is a block diagram showing the functional configuration of the stereoscopic video reproduction unit according to Embodiment 4 of the present invention.
FIG. 29 is a flowchart for explaining the process in which the stereoscopic video reproduction unit according to Embodiment 4 of the present invention determines the side of the display screen.
FIG. 30 is a diagram for explaining the process in which the stereoscopic video reproduction unit according to Embodiment 4 of the present invention determines the side of the display screen.
FIG. 31 is a diagram for explaining the process in which the stereoscopic video reproduction unit according to Embodiment 4 of the present invention determines the side of the display screen.
FIG. 32 is a flowchart for explaining the process in which the stereoscopic video reproduction unit according to a modification of Embodiment 4 of the present invention determines the side of the display screen.
FIG. 33 is a diagram for explaining the process in which the stereoscopic video reproduction unit according to the modification of Embodiment 4 of the present invention determines the side of the display screen.
FIG. 34 is a diagram for explaining the process in which the stereoscopic video reproduction unit according to the modification of Embodiment 4 of the present invention determines the side of the display screen.
FIG. 35 is a diagram showing the minimum configuration of the video processing device according to the embodiments of the present invention and their modifications.
Hereinafter, video processing devices according to embodiments of the present invention will be described with reference to the drawings.
(Embodiment 1)
FIG. 1 is an external view showing the appearance of the video processing device 10 according to Embodiment 1 of the present invention.
The video processing device 10 is a mobile information terminal such as a tablet terminal, and it processes an input video so that observers can view the input video displayed on its display screen. As shown in the figure, the video processing device 10 includes a 3D glasses sensor L111, a 3D glasses sensor R112, a transmitter L151, a transmitter R152, and a display screen 600.
Here, the glasses 20 are glasses worn by the observer A; they are shutter glasses that open and close a left-eye shutter and a right-eye shutter in synchronization with the times at which the left-eye video and the right-eye video are displayed on the display screen 600. The glasses 20 include a transmitter 21 and a receiver 22. Likewise, the glasses 30 are shutter glasses worn by the observer B and include a transmitter 31 and a receiver 32.
The display screen 600 is a rectangular screen that displays an input video; that is, its outer periphery has four sides.
The 3D glasses sensor L111 and the 3D glasses sensor R112 are sensors that each detect a signal emitted by a device used by an observer. Specifically, they detect a signal emitted by the transmitter 21 of the glasses 20 or a signal emitted by the transmitter 31 of the glasses 30.
The signals emitted by the transmitters 21 and 31 are, for example, radio waves or infrared light, but any kind of signal may be used. Because detection depends on the distance between the sensor and the transmitter, whether a signal is detected strongly, detected weakly, or not detected at all varies with the position of the glasses.
Depending on the position of the glasses 20, the signal emitted by the transmitter 21 of the glasses 20 may be detected by both the 3D glasses sensor L111 and the 3D glasses sensor R112. In that case, the closer 3D glasses sensor detects the stronger signal, and the video processing device 10 performs its control on the assumption that the observer wearing the glasses 20 is near the 3D glasses sensor that detected the stronger signal. The same applies to the glasses 30.
In FIG. 1, the glasses 20 are near the 3D glasses sensor L111 and the glasses 30 are near the 3D glasses sensor R112. For simplicity, the following description assumes that the signal emitted by the transmitter 21 of the glasses 20 is detected by the 3D glasses sensor L111 and that the signal emitted by the transmitter 31 of the glasses 30 is detected by the 3D glasses sensor R112.
In this way, the 3D glasses sensor L111 and the 3D glasses sensor R112 are sensors that detect glasses and are arranged at positions corresponding to two of the four sides forming the outer periphery of the display screen 600. These two sides are two opposing sides among those four sides.
The transmitter L151 sends, to the receiver 22 of the glasses 20, a synchronization signal for opening and closing the left-eye shutter and the right-eye shutter of the glasses 20 in synchronization with the times at which the left-eye video and the right-eye video are displayed on the display screen 600. The glasses 20 open and close the left-eye shutter and the right-eye shutter according to the synchronization signal received by the receiver 22.
Similarly, the transmitter R152 sends, to the receiver 32 of the glasses 30, a synchronization signal for opening and closing the left-eye shutter and the right-eye shutter of the glasses 30 in synchronization with the times at which the left-eye video and the right-eye video are displayed on the display screen 600. The glasses 30 open and close the left-eye shutter and the right-eye shutter according to the synchronization signal received by the receiver 32.
Next, the detailed configuration of the video processing device 10 will be described.
FIG. 2 is a block diagram showing the functional configuration of the video processing device 10 according to Embodiment 1 of the present invention.
As shown in the figure, in addition to the display screen 600, the video processing device 10 includes a stereoscopic video reproduction unit 100, an acquisition unit 200, a selector 300, a decoding unit 400, an audio output unit 500, a user input unit 700, a control unit 800, a non-volatile memory 910, and a volatile memory 920.
The acquisition unit 200 has at least one of a network interface 210, a memory card interface 220, an optical disc drive 230, and a tuner 240, and it acquires a video-and-audio signal or a video-only signal. Specifically, the acquisition unit 200 acquires an encoded stream from a communication network or a broadcast wave via the network interface 210 or the tuner 240, or acquires an encoded stream from a memory card or an optical disc inserted into the video processing device 10 via the memory card interface 220 or the optical disc drive 230.
The selector 300 outputs the encoded stream acquired by the acquisition unit 200 to the decoding unit 400.
The decoding unit 400 decodes the encoded stream acquired by the acquisition unit 200.
The audio output unit 500 receives the audio signal obtained when the decoding unit 400 decodes the encoded stream and outputs it as audio.
The user input unit 700 accepts input operations by the user.
The control unit 800 outputs control information for controlling the stereoscopic video reproduction unit 100 to the stereoscopic video reproduction unit 100, using information obtained when the decoding unit 400 decodes the encoded stream and information acquired by the user input unit 700.
The stereoscopic video reproduction unit 100 acquires the input video obtained when the decoding unit 400 decodes the encoded stream, processes the input video to generate an output video, and outputs the output video for display on the display screen 600. The stereoscopic video reproduction unit 100 is described in detail later.
The non-volatile memory 910 and the volatile memory 920 store information and the like used by the video processing device 10 to process video.
Next, the detailed configuration of the stereoscopic video reproduction unit 100 will be described.
FIG. 3 is a block diagram showing the functional configuration of the stereoscopic video reproduction unit 100 according to Embodiment 1 of the present invention.
As shown in the figure, in addition to the 3D glasses sensor L111, the 3D glasses sensor R112, the transmitter L151, and the transmitter R152, the stereoscopic video reproduction unit 100 includes a determination unit 110, a video control unit 120, a video rotation unit 130, a video output unit 140, and a synchronization signal generation unit 150.
The determination unit 110 determines the side of the display screen 600 closest to the observer's observation position. Specifically, when a sensor detects a signal emitted by a device used by the observer, the determination unit 110 determines the side of the display screen 600 closest to that sensor as the side of the display screen 600 closest to the observation position.
More specifically, the determination unit 110 determines the side of the display screen 600 closest to the position of the stereoscopic-viewing glasses worn by the observer as the side of the display screen 600 closest to the observation position. That is, when the 3D glasses sensor L111 detects the glasses 20, the determination unit 110 determines the side of the display screen 600 closest to the 3D glasses sensor L111 as the side of the display screen 600 closest to that observation position, and when the 3D glasses sensor R112 detects the glasses 30, it determines the side of the display screen 600 closest to the 3D glasses sensor R112 as the side of the display screen 600 closest to that observation position.
The video control unit 120 controls the video rotation unit 130, the video output unit 140, and the synchronization signal generation unit 150. Specifically, the video control unit 120 passes the side determined by the determination unit 110 to the video rotation unit 130 to have the video rotated, and gives a video output instruction to the video output unit 140.
The video control unit 120 also causes the synchronization signal generation unit 150 to generate synchronization signals and controls the opening and closing of the left-eye shutter and the right-eye shutter of each pair of shutter glasses worn by the plurality of observers. That is, the video control unit 120 controls the opening and closing of the left-eye and right-eye shutters of the glasses 20 worn by the observer A and of the glasses 30 worn by the observer B.
More specifically, when the left-eye video for the observer at the observation position corresponding to the side determined by the determination unit 110 is displayed on the display screen 600, the video control unit 120 opens the left-eye shutter of the shutter glasses worn by the observer at that observation position, closes the right-eye shutter of those shutter glasses, and closes both the left-eye shutter and the right-eye shutter of the shutter glasses worn by every other observer.
Similarly, when the right-eye video for the observer at the observation position corresponding to the side determined by the determination unit 110 is displayed on the display screen 600, the video control unit 120 opens the right-eye shutter of the shutter glasses worn by the observer at that observation position, closes the left-eye shutter of those shutter glasses, and closes both the left-eye shutter and the right-eye shutter of the shutter glasses worn by every other observer.
The synchronization signal generation unit 150 generates, according to instructions from the video control unit 120, synchronization signals for opening and closing the left-eye shutters and the right-eye shutters of the shutter glasses. Specifically, the synchronization signal generation unit 150 generates synchronization signals for opening and closing the left-eye and right-eye shutters of the glasses 20 and the glasses 30, and outputs them to the glasses 20 and the glasses 30 via the transmitter L151 and the transmitter R152.
The video rotation unit 130 generates, according to instructions from the video control unit 120, a rotated video obtained by rotating the input video so that the side determined by the determination unit 110 becomes the bottom side. Specifically, the video rotation unit 130 generates the rotated video for the observer wearing the detected glasses by rotating by 180 degrees the input video for the observer at one of the observation positions corresponding to the two sides of the display screen 600. This is described in detail below.
FIG. 4 is a block diagram showing the functional configuration of the video rotation unit 130 according to Embodiment 1 of the present invention.
As shown in the figure, the video rotation unit 130 includes a video input/output unit 131, a rotation control unit 132, and a memory 133.
The video input/output unit 131 receives, as input videos, a left-eye video and a right-eye video for viewing a stereoscopic video, and stores the received left-eye video and right-eye video in the memory 133.
The rotation control unit 132 then reads the left-eye video and the right-eye video from the memory 133, generates, as the rotated videos, a rotated left-eye video and a rotated right-eye video obtained by rotating them, and stores the generated rotated left-eye video and rotated right-eye video in the memory 133.
The video input/output unit 131 then reads the rotated left-eye video and the rotated right-eye video from the memory 133 and outputs them to the video output unit 140.
Alternatively, the video rotation unit 130 may omit the rotation control unit 132; in that case, when reading the videos from the memory 133, the video input/output unit 131 reads out the rotated left-eye video and the rotated right-eye video obtained by rotating the left-eye video and the right-eye video, and outputs them to the video output unit 140.
Returning to FIG. 3, the video output unit 140 outputs the rotated videos generated by the video rotation unit 130. That is, the video output unit 140 alternately outputs the rotated left-eye video and the rotated right-eye video generated by the video rotation unit 130, as the left-eye and right-eye videos for each observer, at fixed time intervals, so that they are displayed alternately on the display screen 600 at those intervals. Specifically, the video output unit 140 alternately outputs the left-eye and right-eye videos for the observer A and the observer B at fixed time intervals according to instructions from the video control unit 120, as sketched below.
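A minimal sketch of this frame-sequential output, under the assumption that the four views are held in a dictionary keyed by observer and eye; the schedule, function names, and data layout are illustrative and not the device's actual interface.

```python
import itertools

# Hypothetical display order for two observers A and B sharing one frame-sequential
# screen: A-left, A-right, B-left, B-right, repeated at a fixed interval.
FRAME_SCHEDULE = [("A", "left"), ("A", "right"), ("B", "left"), ("B", "right")]


def output_frames(rotated_views, num_cycles):
    """rotated_views maps (observer, eye) -> frame; yields (observer, eye, frame)
    in the order in which the frames would be driven to the display."""
    total = num_cycles * len(FRAME_SCHEDULE)
    for observer, eye in itertools.islice(itertools.cycle(FRAME_SCHEDULE), total):
        yield observer, eye, rotated_views[(observer, eye)]
```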
Next, the video processing performed by the video processing device 10 will be described in detail.
FIG. 5 is a flowchart for explaining the video processing performed by the video processing device 10 according to Embodiment 1 of the present invention.
As shown in the figure, the determination unit 110 first determines the side of the display screen 600 closest to the observer's observation position (S102). The process in which the determination unit 110 determines the side of the display screen 600 is described in detail later.
The video rotation unit 130 then generates a rotated video by rotating the input video so that the side determined by the determination unit 110 becomes the bottom side (S104). The process in which the video rotation unit 130 generates the rotated video is described in detail later.
The video output unit 140 then outputs the rotated video generated by the video rotation unit 130 (S106). The process in which the video output unit 140 outputs the rotated video is described in detail later.
Next, the process in which the determination unit 110 determines the side of the display screen 600 (S102 in FIG. 5) will be described in detail.
FIG. 6 is a flowchart for explaining the process in which the determination unit 110 according to Embodiment 1 of the present invention determines the side of the display screen 600.
FIG. 7 is a diagram for explaining the process in which the determination unit 110 according to Embodiment 1 of the present invention determines the side of the display screen 600.
First, as shown in FIG. 6, a sensor detects a signal emitted by a device used by an observer (S202). Specifically, as shown in FIG. 7, the 3D glasses sensor L111 detects the glasses 20 and the 3D glasses sensor R112 detects the glasses 30.
Then, as shown in FIG. 6, the determination unit 110 determines the side of the display screen 600 closest to the detecting sensor (S204).
Specifically, as shown in FIG. 7, when the 3D glasses sensor L111 detects the signal emitted by the transmitter 21 of the glasses 20, or detects it more strongly than the 3D glasses sensor R112 does, the determination unit 110 determines the side v1-v2, the side of the display screen 600 closest to the 3D glasses sensor L111 that detected the glasses 20, as the side of the display screen 600 closest to the observation position of the observer A.
Likewise, when the 3D glasses sensor R112 detects the signal emitted by the transmitter 31 of the glasses 30, or detects it more strongly than the 3D glasses sensor L111 does, the determination unit 110 determines the side v3-v4, the side of the display screen 600 closest to the 3D glasses sensor R112 that detected the glasses 30, as the side of the display screen 600 closest to the observation position of the observer B.
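A small sketch of this decision, assuming each pair of glasses yields one signal-strength reading per 3D glasses sensor; the reading format and function names are assumptions made for illustration.

```python
def nearest_side(strength_at_l: float, strength_at_r: float) -> str:
    """Pick the screen side nearest to a pair of glasses: the stronger reading
    implies the shorter distance, so the side by that sensor is chosen."""
    return "v1-v2" if strength_at_l >= strength_at_r else "v3-v4"


def assign_sides(readings: dict) -> dict:
    """readings maps a glasses id to (strength at sensor L111, strength at sensor R112);
    returns a map from glasses id to the nearest display-screen side."""
    return {glasses: nearest_side(l, r) for glasses, (l, r) in readings.items()}


# Example: glasses 20 sit by sensor L111, glasses 30 by sensor R112.
assert assign_sides({"glasses20": (0.9, 0.2), "glasses30": (0.1, 0.8)}) == {
    "glasses20": "v1-v2", "glasses30": "v3-v4"}
```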
This completes the process in which the determination unit 110 determines the side of the display screen 600 (S102 in FIG. 5).
Next, the process in which the video rotation unit 130 generates the rotated video (S104 in FIG. 5) will be described in detail.
FIG. 8 is a flowchart for explaining the process in which the video rotation unit 130 according to Embodiment 1 of the present invention generates a rotated video.
FIGS. 9 to 13B are diagrams for explaining the process in which the video rotation unit 130 according to Embodiment 1 of the present invention generates a rotated video.
First, as shown in FIG. 8, the video input/output unit 131 of the video rotation unit 130 receives a left-eye video and a right-eye video as input videos (S302).
FIGS. 9 to 11C are diagrams for explaining the input videos received by the video input/output unit 131 according to Embodiment 1 of the present invention.
As shown in FIG. 9, an object 40 is imaged by a camera AL51 that captures the left-eye video for the observer A, a camera AR52 that captures the right-eye video for the observer A, a camera BL53 that captures the left-eye video for the observer B, and a camera BR54 that captures the right-eye video for the observer B.
As a result, the video shown in FIG. 10A is captured as the left-eye video for the observer A, and the video shown in FIG. 10B is captured as the right-eye video for the observer A. With these videos, the observer A can view a stereoscopic video as shown in FIG. 10C.
Likewise, the video shown in FIG. 11A is captured as the left-eye video for the observer B, and the video shown in FIG. 11B is captured as the right-eye video for the observer B. With these videos, the observer B can view a stereoscopic video as shown in FIG. 11C.
Returning to FIG. 8, the rotation control unit 132 of the video rotation unit 130 acquires the sides determined by the determination unit 110 from the video control unit 120 as the sides to be processed and determines whether any unprocessed side remains (S304). When it determines that an unprocessed side remains (Yes in S304), the rotation control unit 132 selects one of the unprocessed sides (S306).
The rotation control unit 132 then determines the rotation angle for the selected unprocessed side (S308). When it determines that the rotation angle for the selected side is zero ("rotation angle zero" in S308), it again determines whether any unprocessed side remains (S304).
When the rotation control unit 132 determines that the rotation angle for the selected side is 180 degrees ("rotation angle 180 degrees" in S308), it rotates the input video by 180 degrees (S310) and again determines whether any unprocessed side remains (S304).
FIGS. 12A to 13B are diagrams for explaining the process in which the rotation control unit 132 according to Embodiment 1 of the present invention rotates the input video.
As shown in FIGS. 11A and 11B, the input videos for the observer B would otherwise be displayed on the display screen 600 with the side v1-v2 as the bottom side. However, as shown in FIG. 7, the observer B views the screen from the side opposite to the observer A.
For this reason, as shown in FIG. 12A, the left-eye video for the observer B must be rotated by 180 degrees and displayed on the display screen 600 with the side v3-v4 as the bottom side. Similarly, as shown in FIG. 12B, the right-eye video for the observer B must be rotated by 180 degrees and displayed on the display screen 600 with the side v3-v4 as the bottom side.
To do so, as shown in FIG. 13A, the rotation control unit 132 changes the pixel value at each pixel of the video to the value it would have if the video were rotated by 180 degrees, thereby rotating the video as shown in FIG. 13B.
Specifically, the rotation control unit 132 relocates the pixel value of the pixel at coordinates (x, y) to the position at coordinates (h - x, v - y), where x and y are the horizontal and vertical positions of the pixel and h and v are the numbers of pixels in the horizontal and vertical directions, respectively.
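Expressed as code, this per-pixel relocation looks roughly as follows; the explicit loops mirror the (x, y) to (h - x, v - y) mapping above restated with 0-based indices, and NumPy is used only as a convenient frame representation.

```python
import numpy as np


def rotate_180(frame: np.ndarray) -> np.ndarray:
    """Rotate a frame by 180 degrees by re-addressing every pixel.

    With h and v the horizontal and vertical pixel counts and 0-based indices,
    the pixel at (x, y) moves to (h - 1 - x, v - 1 - y), i.e. both axes are reversed.
    """
    v, h = frame.shape[:2]          # rows (vertical), columns (horizontal)
    rotated = np.empty_like(frame)
    for y in range(v):
        for x in range(h):
            rotated[v - 1 - y, h - 1 - x] = frame[y, x]
    return rotated


# The explicit loops follow the per-pixel description; in practice
# frame[::-1, ::-1] produces the same result in one step.
```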
Returning to FIG. 8, the rotation control unit 132 repeats the above processing until it determines that no unprocessed side remains; when it determines that no unprocessed side remains (No in S304), the processing ends.
This completes the process in which the video rotation unit 130 generates the rotated video (S104 in FIG. 5).
Next, the process in which the video output unit 140 outputs the rotated video (S106 in FIG. 5) will be described in detail.
FIG. 14 is a flowchart for explaining the process in which the video output unit 140 according to Embodiment 1 of the present invention outputs the rotated video.
FIG. 15 is a diagram for explaining the process in which the video output unit 140 according to Embodiment 1 of the present invention outputs the rotated video.
 まず、図14に示すように、映像制御部120は、映像出力部140に左眼用の映像を出力させ、表示画面600に表示させる場合に(S402でYes)、当該映像の観察者のシャッタ眼鏡の左眼用シャッタを開き、他の観察者のシャッタ眼鏡の左眼用及び右眼用シャッタを閉じる(S404)。 First, as illustrated in FIG. 14, when the video control unit 120 causes the video output unit 140 to output a left-eye video and display it on the display screen 600 (Yes in S <b> 402), the viewer's shutter for the video is displayed. The shutter for the left eye of the glasses is opened, and the shutter for the left eye and the right eye of the shutter glasses of another observer are closed (S404).
 例えば、図15の(a)に示すように、観察者Aの左眼用の映像が表示画面600に表示される場合に、映像制御部120は、観察者Aの眼鏡20の左眼用シャッタを開き、観察者Bの眼鏡30の左眼用及び右眼用シャッタを閉じる。また、図15の(b)に示すように、観察者Bの左眼用の映像が表示画面600に表示される場合に、映像制御部120は、観察者Bの眼鏡20の左眼用シャッタを開き、観察者Aの眼鏡30の左眼用及び右眼用シャッタを閉じる。 For example, as shown in FIG. 15A, when an image for the left eye of the observer A is displayed on the display screen 600, the image control unit 120 displays the shutter for the left eye of the eyeglass 20 of the observer A. And the shutters for the left eye and right eye of the eyeglasses 30 of the observer B are closed. Further, as shown in FIG. 15B, when an image for the left eye of the viewer B is displayed on the display screen 600, the video control unit 120 displays the shutter for the left eye of the glasses 20 of the viewer B. And the shutters for the left eye and right eye of the eyeglasses 30 of the observer A are closed.
Returning to FIG. 14, when the video control unit 120 causes the video output unit 140 to output a right-eye video and display it on the display screen 600 (No in S402, Yes in S406), it opens the right-eye shutter of the shutter glasses of the observer of that video and closes both the left-eye and right-eye shutters of the shutter glasses of the other observer (S408).
For example, as shown in (c) of FIG. 15, when the right-eye video for observer A is displayed on the display screen 600, the video control unit 120 opens the right-eye shutter of observer A's glasses 20 and closes the left-eye and right-eye shutters of observer B's glasses 30. Likewise, as shown in (d) of FIG. 15, when the right-eye video for observer B is displayed on the display screen 600, the video control unit 120 opens the right-eye shutter of observer B's glasses 30 and closes the left-eye and right-eye shutters of observer A's glasses 20.
 このように、映像出力部140は、映像制御部120の指示に従い、観察者A及び観察者B用の左眼用の映像及び右眼用の映像を一定の時間間隔で交互に出力する。 As described above, the video output unit 140 alternately outputs the left-eye video and the right-eye video for the viewer A and the viewer B at regular time intervals in accordance with the instruction of the video control unit 120.
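A rough sketch of this time-multiplexed output and shutter control follows; the per-glasses shutter dictionary, the `display` callback, and the `frames` structure are hypothetical placeholders and not part of the described device.

```python
from itertools import cycle

def set_shutters(glasses, active_observer, open_eyes):
    """Open only the named shutters of one observer's glasses and close
    every other shutter (S404 / S408)."""
    for name, g in glasses.items():
        g["left"] = (name == active_observer and "left" in open_eyes)
        g["right"] = (name == active_observer and "right" in open_eyes)

glasses = {"A": {"left": False, "right": False},
           "B": {"left": False, "right": False}}

# One display period: A-left, B-left, A-right, B-right, as in FIG. 15 (a)-(d).
schedule = cycle([("A", "left"), ("B", "left"), ("A", "right"), ("B", "right")])

def show_next_frame(display, frames):
    observer, eye = next(schedule)
    set_shutters(glasses, observer, {eye})   # only this observer's shutter for `eye` is open
    display(frames[observer][eye])           # output the matching (rotated or unrotated) frame
```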
As described above, according to the video processing device 10 of Embodiment 1 of the present invention, the side of the display screen 600 closest to the observer's observation position is determined, a rotated video is generated by rotating the input video so that the determined side becomes the bottom side, and the generated rotated video is output and displayed on the display screen 600. The video is thus displayed on the display screen 600 with the side closest to the observer as its bottom side, so the observer can observe the video in its proper orientation regardless of the direction from which it is observed.
The video processing device 10 also includes sensors that detect a signal emitted by a device used by an observer; when a sensor detects such a signal, a rotated video is generated by rotating the input video so that the side of the display screen 600 closest to that sensor becomes the bottom side. Since the device is equipment used by the observer, the observer can be assumed to be observing from near the sensor that detected it. Displaying the video so that the side of the display screen 600 closest to that sensor becomes the bottom side therefore allows the observer to observe the video in its proper orientation.
The video processing device 10 also receives as input a left-eye video and a right-eye video for viewing stereoscopic video, generates a rotated left-eye video and a rotated right-eye video by rotating them, and outputs the result. This allows the stereoscopic video to be observed in its proper orientation regardless of the viewing direction.
The video output unit 140 also outputs the rotated left-eye video and the rotated right-eye video alternately at fixed time intervals, so that they are displayed alternately on the display screen 600 at those intervals. By viewing the left-eye and right-eye videos alternately through glasses for stereoscopic viewing, the observer can observe the stereoscopic video in its proper orientation.
The determination unit 110 also determines the side of the display screen 600 closest to the position of the stereoscopic-viewing glasses worn by an observer, and a rotated video is generated by rotating the input video so that that side becomes the bottom side. Since the glasses are worn by the observer, the position of the glasses corresponds to the observer's observation position. Displaying the video so that the side of the display screen 600 closest to the glasses becomes the bottom side therefore allows the observer to observe the video in its proper orientation.
The glasses are shutter glasses, and when the left-eye video for the observer near the determined side is displayed, the video processing device 10 opens the left-eye shutter and closes the right-eye shutter of that observer's shutter glasses, and closes both the left-eye and right-eye shutters of the shutter glasses of every other observer. Likewise, when the right-eye video for the observer near the determined side is displayed, the video processing device 10 opens the right-eye shutter and closes the left-eye shutter of that observer's shutter glasses, and closes both shutters of the shutter glasses of every other observer. By viewing the left-eye and right-eye videos alternately through the shutter glasses for stereoscopic viewing, the observer can observe the stereoscopic video in its proper orientation.
 また、映像処理装置10は、長方形の表示画面600を備える。これにより、当該長方形の表示画面600において観察する方向にかかわらず、映像本来の観察すべき向きで、立体映像を観察することができる。 In addition, the video processing apparatus 10 includes a rectangular display screen 600. As a result, regardless of the direction of observation on the rectangular display screen 600, a stereoscopic image can be observed in the original direction to be observed.
The video processing device 10 also includes the rectangular display screen 600 and a sensor that detects glasses near each of two of the four sides of the display screen 600; when a sensor detects glasses, a rotated video is generated by rotating the input video so that the side of the display screen 600 closest to the sensor that detected the glasses becomes the bottom side. Since the glasses are worn by an observer, if glasses are placed and detected near one of those two sides, the position of the glasses corresponds to the observer's observation position. Displaying the video so that, of those two sides, the side closest to the glasses becomes the bottom side therefore allows the observer to observe the video in its proper orientation.
The video processing device 10 also generates a rotated video by rotating by 180 degrees the input video for the observer at one of the two observation positions corresponding to two opposing sides of the four sides of the display screen 600. That is, when two observers face each other across the display screen 600, the video appears upside down to one of them; rotating one observer's input video by 180 degrees therefore lets both observers observe the video in its proper orientation.
 また、映像処理装置10は、映像信号を取得し、取得した映像信号を復号し、映像を出力して表示画面600に表示する。このように、映像処理装置10は、映像信号を取得し、処理して表示するモバイル情報端末(タブレット端末)などのシステムとして実現することができる。 In addition, the video processing apparatus 10 acquires a video signal, decodes the acquired video signal, outputs a video, and displays the video on the display screen 600. Thus, the video processing apparatus 10 can be realized as a system such as a mobile information terminal (tablet terminal) that acquires a video signal, processes it, and displays it.
(Modification of Embodiment 1)
Next, a modification of Embodiment 1 of the present invention will be described. In Embodiment 1 the observers observe stereoscopic video, whereas in this modification the observers observe two-dimensional video. The video processing device 10 in this modification has the same components as the video processing device 10 in Embodiment 1, so a detailed description is omitted; only the video control unit 120, whose function differs from that of Embodiment 1, is described below.
When the video for the observer at the observation position corresponding to the side determined by the determination unit 110 is displayed on the display screen 600, the video control unit 120 opens both the left-eye and right-eye shutters of the shutter glasses worn by the observer at that observation position, and closes both the left-eye and right-eye shutters of the shutter glasses worn by every other observer.
For example, when the video for observer A is displayed on the display screen 600, the video control unit 120 opens the left-eye and right-eye shutters of observer A's glasses 20 and closes the left-eye and right-eye shutters of observer B's glasses 30. When the video for observer B is displayed on the display screen 600, the video control unit 120 opens the left-eye and right-eye shutters of observer B's glasses 30 and closes the left-eye and right-eye shutters of observer A's glasses 20.
In accordance with instructions from the video control unit 120, the video rotation unit 130 rotates the input video for observer B, and the video output unit 140 outputs the videos for observer A and observer B alternately at fixed time intervals.
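A short sketch of this two-dimensional variant, reusing a hypothetical per-glasses shutter dictionary as above; the function name is illustrative.

```python
def set_shutters_2d(glasses, active_observer):
    """2D variant: when the active observer's frame is on screen, open both
    of that observer's shutters and close both shutters of every other observer."""
    for name, g in glasses.items():
        g["left"] = g["right"] = (name == active_observer)
```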
As described above, according to the video processing device 10 of this modification of Embodiment 1 of the present invention, when the video for the observer at the observation position corresponding to the side of the display screen 600 closest to the glasses worn by that observer is displayed on the display screen 600, both the left-eye and right-eye shutters of that observer's shutter glasses are opened, and both the left-eye and right-eye shutters of the shutter glasses of every other observer are closed. This allows two-dimensional video to be viewed in its proper orientation using shutter glasses.
(Embodiment 2)
Next, Embodiment 2 of the present invention will be described. In Embodiment 1, the observers observe from each of two opposing sides of the four sides of the display screen 600; in Embodiment 2, observers observe from each of the four sides of the display screen 600.
 図16は、本発明の実施の形態2に係る映像処理装置10aの外観を示す外観図である。 FIG. 16 is an external view showing the external appearance of the video processing apparatus 10a according to Embodiment 2 of the present invention.
As shown in the figure, the video processing device 10a includes a 3D glass sensor B113, a 3D glass sensor T114, a transmitter B153, and a transmitter T154 in addition to the 3D glass sensor L111, the 3D glass sensor R112, the transmitter L151, and the transmitter R152 of Embodiment 1. In other words, the video processing device 10a includes a sensor for detecting glasses at a position corresponding to each of the four sides that form the outer periphery of the display screen 600.
 ここで、3DグラスセンサB113及び3DグラスセンサT114は、3DグラスセンサL111または3DグラスセンサR112と同様の機能を有し、トランスミッタB153及びトランスミッタT154は、トランスミッタL151またはトランスミッタR152と同様の機能を有するため、詳細な説明は省略する。 Here, the 3D glass sensor B113 and the 3D glass sensor T114 have the same function as the 3D glass sensor L111 or the 3D glass sensor R112, and the transmitter B153 and the transmitter T154 have the same function as the transmitter L151 or the transmitter R152. Detailed description will be omitted.
 また、眼鏡60は、観察者Cが装着しているシャッタ眼鏡であり、トランスミッタ61とレシーバ62とを備えている。また、眼鏡70は、観察者Dが装着しているシャッタ眼鏡であり、トランスミッタ71とレシーバ72とを備えている。ここで、眼鏡60及び眼鏡70は、眼鏡20または眼鏡30と同様の機能を有するため、詳細な説明は省略する。 The eyeglasses 60 are shutter eyeglasses worn by the observer C, and include a transmitter 61 and a receiver 62. The glasses 70 are shutter glasses worn by the observer D, and include a transmitter 71 and a receiver 72. Here, the spectacles 60 and the spectacles 70 have the same functions as those of the spectacles 20 or the spectacles 30, and thus detailed description thereof is omitted.
 図17は、本発明の実施の形態2に係る立体映像再生部100aの機能構成を示すブロック図である。 FIG. 17 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit 100a according to Embodiment 2 of the present invention.
As shown in the figure, the stereoscopic video reproduction unit 100a includes, in addition to the 3D glass sensor L111, the 3D glass sensor R112, the 3D glass sensor B113, the 3D glass sensor T114, the transmitter L151, the transmitter R152, the transmitter B153, and the transmitter T154, a determination unit 110, a video control unit 120, a video rotation unit 130a, a video output unit 140, and a synchronization signal generation unit 150. In the following, only the video rotation unit 130a, whose function differs from that of Embodiment 1, is described; the other components are the same as in Embodiment 1, so a detailed description is omitted.
 図18は、本発明の実施の形態2に係る映像回転部130aの機能構成を示すブロック図である。 FIG. 18 is a block diagram showing a functional configuration of the video rotation unit 130a according to Embodiment 2 of the present invention.
 同図に示すように、映像回転部130aは、上記実施の形態1における映像入出力部131、回転制御部132及びメモリ133の他に、映像加工部134を備えている。 As shown in the figure, the video rotation unit 130a includes a video processing unit 134 in addition to the video input / output unit 131, the rotation control unit 132, and the memory 133 in the first embodiment.
 映像入出力部131は、入力映像として、立体映像を視聴するための左眼用映像と右眼用映像とを入力する。そして、映像入出力部131は、入力した左眼用映像と右眼用映像とをメモリ133に記憶させる。 The video input / output unit 131 inputs a left-eye video and a right-eye video for viewing a stereoscopic video as input video. Then, the video input / output unit 131 stores the input left-eye video and right-eye video in the memory 133.
When the side determined by the determination unit 110 is the first of the four sides, the rotation control unit 132 rotates the input video for the observer at the observation position corresponding to that first side by 180 degrees. When the determined side is the second side, the rotation control unit 132 rotates the input video for the observer at the observation position corresponding to that second side by 90 degrees to the left. When the determined side is the third side, the rotation control unit 132 rotates the input video for the observer at the observation position corresponding to that third side by 90 degrees to the right.
 具体的には、回転制御部132は、メモリ133から左眼用映像と右眼用映像とを読み出し、左眼用映像と右眼用映像とを回転させた映像をメモリ133に記憶させる。 Specifically, the rotation control unit 132 reads the left-eye video and the right-eye video from the memory 133 and stores the video obtained by rotating the left-eye video and the right-eye video in the memory 133.
The video processing unit 134 adjusts the size of the input video that the rotation control unit 132 has rotated by 90 degrees to the size of the display screen 600, thereby generating the rotated video. That is, the video processing unit 134 reads the 90-degree-rotated video from the memory 133, adjusts its size, and stores it back in the memory 133.
Specifically, the video processing unit 134 adjusts the size of the 90-degree-rotated input video to the size of the display screen 600 by letterboxing, that is, by shrinking the video so that the entire video fits on the display screen 600.
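A minimal sketch of such a letterbox calculation, assuming uniform scaling with centred black bars; the function and parameter names are illustrative and not taken from the disclosure.

```python
def letterbox_size(src_w, src_h, screen_w, screen_h):
    """Scale a (src_w x src_h) frame uniformly so that the whole frame
    fits on the screen; the unused area is left as black bars."""
    scale = min(screen_w / src_w, screen_h / src_h)
    out_w, out_h = int(src_w * scale), int(src_h * scale)
    # Offsets that centre the scaled frame between the bars.
    x_off = (screen_w - out_w) // 2
    y_off = (screen_h - out_h) // 2
    return out_w, out_h, x_off, y_off
```

For example, a 1920x1080 frame rotated by 90 degrees becomes 1080x1920 and, on a 1920x1080 screen, letterbox_size(1080, 1920, 1920, 1080) scales it by 0.5625 to roughly 607x1080 with black bars on the left and right.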
 そして、映像入出力部131は、メモリ133からレターボックス化が行われた映像を読み出し、映像出力部140に出力する。 The video input / output unit 131 reads the letterboxed video from the memory 133 and outputs it to the video output unit 140.
Alternatively, the video rotation unit 130a may be configured without the rotation control unit 132 and the video processing unit 134; in that case, when the video input/output unit 131 reads the video from the memory 133, it reads out the left-eye and right-eye videos in rotated and size-adjusted form and outputs them to the video output unit 140.
 次に、映像回転部130aが回転映像を生成する処理(図5のS104)の詳細について、説明する。 Next, details of the process (S104 in FIG. 5) in which the video rotation unit 130a generates a rotated video will be described.
 図19は、本発明の実施の形態2に係る映像回転部130aが回転映像を生成する処理を説明するためのフローチャートである。 FIG. 19 is a flowchart for explaining a process in which the video rotation unit 130a according to Embodiment 2 of the present invention generates a rotated video.
 図20は、本発明の実施の形態2に係る映像回転部130aが回転映像を生成する処理を説明するための図である。 FIG. 20 is a diagram for explaining processing in which the video rotation unit 130a according to Embodiment 2 of the present invention generates a rotated video.
 まず、図19に示すように、映像回転部130aの映像入出力部131は、入力映像を入力し(S302)、回転制御部132は、未処理辺があるか否かを判断し(S304)、未処理辺があると判断した場合に(S304でYes)、当該未処理辺のうちの1つの辺を選択する(S306)。なお、これらの処理(S302~S306)は、上記実施の形態1での処理(S302~S306)と同様であるため、詳細な説明は省略する。 First, as shown in FIG. 19, the video input / output unit 131 of the video rotation unit 130a inputs an input video (S302), and the rotation control unit 132 determines whether there is an unprocessed side (S304). If it is determined that there is an unprocessed side (Yes in S304), one of the unprocessed sides is selected (S306). Since these processes (S302 to S306) are the same as the processes (S302 to S306) in the first embodiment, detailed description thereof is omitted.
 そして、回転制御部132は、選択した未処理辺の回転角を判別する(S312)。回転制御部132は、選択した未処理辺の回転角が「回転角ゼロ」であると判別した場合は(S312で「回転角ゼロ」)、再度、未処理辺があるか否かを判断する(S304)。 Then, the rotation control unit 132 determines the rotation angle of the selected unprocessed side (S312). When the rotation control unit 132 determines that the rotation angle of the selected unprocessed side is “zero rotation angle” (“rotation angle zero” in S312), the rotation control unit 132 determines again whether there is an unprocessed side. (S304).
If the rotation control unit 132 determines that the rotation angle of the selected unprocessed side is "rotation angle left 90 degrees" ("rotation angle left 90 degrees" in S312), it rotates the input video 90 degrees to the left (S314).
The video processing unit 134 then adjusts the size of the input video rotated 90 degrees to the left to the size of the display screen 600 by letterboxing (S316), and the rotation control unit 132 again determines whether there is an unprocessed side (S304).
Specifically, as shown in FIG. 20, the video in (a) of the figure is rotated 90 degrees to the left to produce the video in (b) of the figure, and the video in (b) is letterboxed to produce the video in (c) of the figure.
If the rotation control unit 132 determines that the rotation angle of the selected unprocessed side is "rotation angle right 90 degrees" ("rotation angle right 90 degrees" in S312), it rotates the input video 90 degrees to the right (S318).
The video processing unit 134 then adjusts the size of the input video rotated 90 degrees to the right to the size of the display screen 600 by letterboxing (S320), and the rotation control unit 132 again determines whether there is an unprocessed side (S304).
 また、回転制御部132は、選択した未処理辺の回転角が「回転角180度」であると判別した場合は(S312で「回転角180度」)、入力映像を180度回転させ(S322)、再度、未処理辺があるか否かを判断する(S304)。 When the rotation control unit 132 determines that the rotation angle of the selected unprocessed side is “rotation angle 180 degrees” (“rotation angle 180 degrees” in S312), the rotation control unit 132 rotates the input image by 180 degrees (S322). It is determined again whether there is an unprocessed side (S304).
 このようにして、回転制御部132は、未処理辺がないと判断するまで上記処理を繰り返し行い、未処理辺がないと判断した場合は(S304でNo)、処理を終了する。 In this way, the rotation control unit 132 repeats the above process until it is determined that there is no unprocessed side. When it is determined that there is no unprocessed side (No in S304), the process is terminated.
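A sketch of this per-side selection (S312 to S322) using NumPy's rot90; the mapping of side names to rotations and the assumption that "left" means counter-clockwise are illustrative choices, not taken from the disclosure.

```python
import numpy as np

def rotate_for_side(frame: np.ndarray, side: str) -> np.ndarray:
    """Pick the rotation for an observer's frame from the side judged
    closest to that observer; the reference side needs no rotation."""
    if side == "bottom":              # rotation angle zero
        return frame
    if side == "top":                 # first (opposite) side: 180 degrees
        return np.rot90(frame, 2)
    if side == "left":                # second side: 90 degrees to the left
        return np.rot90(frame, 1)     # counter-clockwise
    if side == "right":               # third side: 90 degrees to the right
        return np.rot90(frame, -1)    # clockwise
    raise ValueError(side)
```

The 90-degree-rotated frames would then be letterboxed to the screen size as in S316 and S320 before being output.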
 以上により、映像回転部130aが回転映像を生成する処理(図5のS104)は、終了する。 Thus, the process (S104 in FIG. 5) in which the video rotation unit 130a generates the rotated video ends.
 図21は、本発明の実施の形態2に係る映像出力部140が映像を出力する処理を説明するための図である。 FIG. 21 is a diagram for explaining processing in which the video output unit 140 according to Embodiment 2 of the present invention outputs video.
As shown in the figure, the video output unit 140 first outputs, in order, left-eye video 1 for observers A, B, C, and D, and then outputs, in order, right-eye video 1 for observers A, B, C, and D. The video output unit 140 then outputs, in order, left-eye video 2 for observers A, B, C, and D, followed by right-eye video 2 for observers A, B, C, and D.
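A sketch of the resulting output order; the observer labels and the helper function are illustrative.

```python
VIEWERS = ["A", "B", "C", "D"]

def output_order(num_frames):
    """One output cycle per frame number: the four left-eye frames in turn,
    then the four right-eye frames, for frame 1, 2, ..."""
    order = []
    for n in range(1, num_frames + 1):
        order += [(v, "left", n) for v in VIEWERS]
        order += [(v, "right", n) for v in VIEWERS]
    return order

# output_order(1) ->
# [('A','left',1), ('B','left',1), ('C','left',1), ('D','left',1),
#  ('A','right',1), ('B','right',1), ('C','right',1), ('D','right',1)]
```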
Depending on the display speed of the display screen 600, the video output unit 140 may output left-eye video 3, or a later left-eye video, after right-eye video 1 instead of left-eye video 2.
As described above, according to the video processing device 10a of Embodiment 2 of the present invention, the device includes the rectangular display screen 600 and sensors that detect glasses near the four sides of the display screen 600, and generates rotated videos as follows: if the side of the display screen 600 closest to the sensor that detected glasses is the first of the four sides, the input video for the observer at the observation position near that first side is rotated 180 degrees; if it is the second side, the input video for the observer near that second side is rotated 90 degrees to the left; and if it is the third side, the input video for the observer near that third side is rotated 90 degrees to the right. That is, when four observers each observe from one of the four sides of the display screen 600, the videos for three of them must be rotated by 180 degrees, 90 degrees to the left, and 90 degrees to the right relative to the video for the remaining observer. By rotating the observers' input videos by 180 degrees, 90 degrees to the left, and 90 degrees to the right in this way, those three observers can also observe the video in its proper orientation.
The video processing device 10a also adjusts the size of an input video rotated by 90 degrees to the size of the display screen 600 when generating the rotated video. That is, when the display screen 600 is rectangular, a video rotated by 90 degrees no longer matches the size of the display screen 600, so adjusting the rotated video to the size of the display screen 600 allows the observer to observe it.
The video processing device 10a also performs letterboxing, shrinking the 90-degree-rotated input video so that the entire video is displayed on the display screen 600. In other words, by letterboxing a video whose size no longer matches the display screen 600 after rotation, the observer can observe the entire video.
(Modification of Embodiment 2)
Next, a modification of Embodiment 2 of the present invention will be described. In Embodiment 2, observers observe from each of the four sides of the display screen 600; in this modification, observers observe from each of any three of the four sides of the display screen 600.
 つまり、本変形例に係る映像処理装置は、表示画面600の外周縁を形成する4辺のうちのいずれか3辺のそれぞれに対応する位置に、眼鏡を検知するセンサをそれぞれ備えている。 That is, the video processing apparatus according to this modification includes sensors for detecting glasses at positions corresponding to any three of the four sides that form the outer periphery of the display screen 600.
The video rotation unit then generates rotated videos as follows: when the side determined by the determination unit 110 is the first of the three sides, it rotates the input video for the observer at the observation position corresponding to that first side by 180 degrees; when the determined side is the second of the three sides, it rotates the input video for the observer at the observation position corresponding to that second side by 90 degrees.
As described above, according to the video processing device of this modification of Embodiment 2 of the present invention, when three observers each observe from one of three sides of the display screen 600, rotating the observers' input videos by 180 degrees and by 90 degrees allows all three observers to observe the video in its proper orientation.
(Embodiment 3)
Next, Embodiment 3 of the present invention will be described. In Embodiments 1 and 2, the video processing device uses shutter glasses for observing stereoscopic video. In Embodiment 3, the video processing device is a naked-eye lenticular type device.
 図22は、本発明の実施の形態3に係る映像処理装置10bの外観を示す外観図である。 FIG. 22 is an external view showing the external appearance of the video processing apparatus 10b according to Embodiment 3 of the present invention.
As shown in the figure, the video processing device 10b includes a display screen 600, an optical lens 11, a controller sensor L111a, and a controller sensor R112a. The video processing device 10b includes the controller sensor L111a and the controller sensor R112a in place of the 3D glass sensor L111 and the 3D glass sensor R112 of the video processing device 10 of Embodiment 1; the rest of its configuration is the same as in Embodiment 1, so a detailed description is omitted.
 ここで、コントローラ80は、観察者Aが映像処理装置10bを制御するために用いる制御装置であり、トランスミッタ81と操作ボタン82とを備えている。また、コントローラ90は、観察者Bが映像処理装置10bを制御するために用いる制御装置であり、トランスミッタ91と操作ボタン92とを備えている。 Here, the controller 80 is a control device used by the observer A to control the video processing device 10b, and includes a transmitter 81 and an operation button 82. The controller 90 is a control device used by the observer B to control the video processing device 10b, and includes a transmitter 91 and operation buttons 92.
 なお、コントローラ80及びコントローラ90は、表示画面600に表示される映像選択用のリモコンであってもよいし、ゲーム用のコントローラであってもよい。また、コントローラ80及びコントローラ90は、映像処理装置10bと有線で接続されていてもよい。 The controller 80 and the controller 90 may be a remote controller for selecting a video displayed on the display screen 600 or a controller for a game. Further, the controller 80 and the controller 90 may be connected to the video processing device 10b by wire.
 表示画面600は、入力される映像を表示する長方形の画面である。つまり、表示画面600は、外周縁に4つの辺を有している。 The display screen 600 is a rectangular screen that displays an input video. That is, the display screen 600 has four sides on the outer periphery.
The controller sensor L111a and the controller sensor R112a are sensors that detect a signal emitted by the controller 80 or 90 used by an observer. As shown in FIG. 22, if the controller 80 is near the controller sensor L111a, the controller sensor L111a detects the signal emitted by the controller 80, and if the controller 90 is near the controller sensor R112a, the controller sensor R112a detects the signal emitted by the controller 90.
 光学レンズ11は、表示画面600の観察側上面に配置される光学レンズである。 The optical lens 11 is an optical lens disposed on the observation side upper surface of the display screen 600.
 図23は、本発明の実施の形態3に係る光学レンズ11の外観を示す外観図である。 FIG. 23 is an external view showing the external appearance of the optical lens 11 according to Embodiment 3 of the present invention.
As shown in the figure, the optical lens 11 is a sheet-shaped lenticular lens (hereinafter referred to as a lenticular sheet) that refracts the displayed video so that a single video is observed from each of the directions of two opposing sides of the four sides forming the outer periphery of the display screen 600 ("observer A direction" and "observer B direction" in the figure).
Specifically, the optical lens 11 refracts a composite video, displayed divided across a plurality of regions of the display screen 600, so that from the side of one edge of the display screen 600 only one of the plurality of videos that were combined is observed. Here, the composite video is a video obtained by dividing a plurality of videos, including the rotated video generated by the video rotation unit 130, into a plurality of regions and combining them; it is output by the video output unit 140 and displayed on the display screen 600.
 図24は、本発明の実施の形態3に係る映像処理装置10bが表示画面600に映像を表示させる過程を示す図である。 FIG. 24 is a diagram illustrating a process in which the video processing apparatus 10b according to the third embodiment of the present invention displays a video on the display screen 600.
 まず、判別部110は、コントローラが発する信号をセンサが検知した場合、検知したセンサに最も近い表示画面600の辺を、観察位置に最も近い表示画面600の辺として判別する。例えば、コントローラ80が発する信号をコントローラセンサL111aが検知した場合、判別部110は、コントローラセンサL111aに最も近い表示画面600の辺を判別する。 First, when the sensor detects a signal generated by the controller, the determination unit 110 determines the side of the display screen 600 closest to the detected sensor as the side of the display screen 600 closest to the observation position. For example, when the controller sensor L111a detects a signal generated by the controller 80, the determination unit 110 determines the side of the display screen 600 that is closest to the controller sensor L111a.
The video rotation unit 130 then generates, as the video for the observer using the controller, a rotated video obtained by rotating the input video so that the side determined by the determination unit 110 becomes the bottom side. For example, when the controller sensor L111a detects the signal emitted by the controller 80, the video rotation unit 130 generates, as the video for observer A using the controller 80, the rotated video shown in (b1) of the figure by rotating the input video shown in (a1) of the figure by 180 degrees. The video for observer B shown in (a2) of the figure is not rotated and is input to the video output unit 140 as the video shown in (b2) of the figure.
 そして、映像出力部140は、同図の(b1)に示す映像と同図の(b2)に示す映像とを合成し、同図の(c)に示す合成映像を生成する。 Then, the video output unit 140 synthesizes the video shown in (b1) of the figure and the video shown in (b2) of the figure, and generates a synthesized video shown in (c) of the figure.
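The disclosure does not spell out how the composite regions are allocated; the sketch below assumes the simplest column-interleaved layout, with alternate pixel columns assigned to the two viewing directions (in practice the allocation would follow the lenticular sheet's lens pitch).

```python
import numpy as np

def interleave_columns(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Build a composite frame whose even columns come from frame_a
    (observer A's rotated video) and odd columns from frame_b (observer B's
    video), so the lenticular sheet can direct each set of columns toward
    the matching side of the screen."""
    assert frame_a.shape == frame_b.shape
    composite = frame_a.copy()
    composite[:, 1::2] = frame_b[:, 1::2]   # odd columns taken from frame_b
    return composite
```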
By displaying this composite video on the display screen 600, the optical lens 11 allows observer A to observe the video shown in (d1) of the figure and observer B to observe the video shown in (d2) of the figure.
As described above, according to the video processing device 10b of Embodiment 3 of the present invention, the device includes the display screen 600 and the optical lens 11, and the optical lens 11 refracts the composite video, displayed divided across a plurality of regions of the display screen 600, so that from the side of one edge of the display screen 600 only one of the videos that were combined is observed. In other words, by using a lenticular lens as the optical lens 11, a single video is observed from the side of each edge of the display screen 600, so the video can be observed in its proper orientation regardless of the direction from which it is observed.
The display screen 600 is rectangular, and the optical lens 11 refracts the videos so that one of the plurality of videos is observed from the direction of each of two opposing sides of the four sides of the display screen 600. An observer observing from the direction of either of those two sides can therefore observe the video in its proper orientation.
The video processing device 10b also includes sensors that detect a signal emitted by a controller used by an observer, and generates a rotated video by rotating the input video so that the side of the display screen 600 closest to the sensor that detected the controller becomes the bottom side. Since the controller is equipment used by the observer, the observer can be assumed to be observing from near the sensor that detected the controller. Displaying the video so that the side of the display screen 600 closest to that sensor becomes the bottom side therefore allows the observer to observe the video in its proper orientation.
(Modification 1 of Embodiment 3)
Next, Modification 1 of Embodiment 3 of the present invention will be described. In Embodiment 3, the observers observe two-dimensional video; in Modification 1, the observers observe stereoscopic video.
 図25は、本発明の実施の形態3の変形例1に係る映像処理装置10cの外観を示す外観図である。 FIG. 25 is an external view showing the external appearance of a video processing apparatus 10c according to Modification 1 of Embodiment 3 of the present invention.
 同図に示すように、映像処理装置10cは、上記実施の形態3における映像処理装置10bの光学レンズ11に代えて、光学レンズ12を備えている。なお、映像処理装置10cのその他の構成については、上記実施の形態3における映像処理装置10bの構成と同様であるため、詳細な説明は省略する。 As shown in the figure, the video processing apparatus 10c includes an optical lens 12 instead of the optical lens 11 of the video processing apparatus 10b in the third embodiment. Since the other configuration of the video processing device 10c is the same as the configuration of the video processing device 10b in the third embodiment, detailed description thereof is omitted.
The optical lens 12 is an optical lens in which two lenticular sheets are stacked, one oriented vertically and one horizontally. The optical lens 12 can therefore refract the videos so that observer A observes a left-eye video and a right-eye video and observer B observes a left-eye video and a right-eye video.
 図26は、本発明の実施の形態3の変形例1に係る映像処理装置10cが表示画面600に映像を表示させる過程を示す図である。 FIG. 26 is a diagram illustrating a process in which the video processing apparatus 10c according to the first modification of the third embodiment of the present invention displays a video on the display screen 600.
For example, when the controller sensor L111a detects the signal emitted by the controller 80, the video rotation unit 130 generates, as the right-eye video for observer A using the controller 80, the rotated video shown in (b1) of the figure by rotating the input video shown in (a1) of the figure by 180 degrees. Similarly, the video rotation unit 130 generates, as the left-eye video for observer A, the rotated video shown in (b2) of the figure by rotating the input video shown in (a2) of the figure by 180 degrees.
The left-eye and right-eye videos for observer B shown in (a3) and (a4) of the figure are not rotated and are input to the video output unit 140 as the videos shown in (b3) and (b4) of the figure.
 そして、映像出力部140は、同図の(b1)~(b4)に示す映像を合成し、同図の(c)に示す合成映像を生成する。この合成映像を表示画面600に表示することで、光学レンズ12によって、観察者A及び観察者Bは、立体映像を観察できる。例えば、卓球などのゲームを表示画面600に表示させ、観察者A及び観察者Bが当該ゲームを立体映像で観察することができる。 Then, the video output unit 140 synthesizes the videos shown in (b1) to (b4) of the figure and generates a synthesized video shown in (c) of the figure. By displaying this synthesized image on the display screen 600, the observer A and the observer B can observe the stereoscopic image by the optical lens 12. For example, a game such as table tennis can be displayed on the display screen 600, and the viewer A and the viewer B can observe the game in a stereoscopic image.
(Modification 2 of Embodiment 3)
Next, Modification 2 of Embodiment 3 of the present invention will be described. In Embodiment 3, the observers observe from each of two opposing sides of the four sides of the display screen 600; in Modification 2, observers observe from each of the four sides of the display screen 600.
 図27は、本発明の実施の形態3の変形例2に係る映像処理装置10dの外観を示す外観図である。 FIG. 27 is an external view showing the external appearance of a video processing apparatus 10d according to Modification 2 of Embodiment 3 of the present invention.
 同図に示すように、映像処理装置10dは、上記実施の形態3におけるコントローラセンサL111a及びコントローラセンサR112aに加え、コントローラセンサB113a及びコントローラセンサT114aを備えている。つまり、映像処理装置10dは、表示画面600の外周縁を形成する4辺のそれぞれに対応する位置に、コントローラを検知するセンサをそれぞれ備えている。 As shown in the figure, the video processing apparatus 10d includes a controller sensor B113a and a controller sensor T114a in addition to the controller sensor L111a and the controller sensor R112a in the third embodiment. That is, the video processing device 10d includes sensors that detect the controller at positions corresponding to the four sides forming the outer peripheral edge of the display screen 600, respectively.
 ここで、コントローラセンサB113a及びコントローラセンサT114aは、コントローラセンサL111aまたはコントローラセンサR112aと同様の機能を有するため、詳細な説明は省略する。 Here, since the controller sensor B113a and the controller sensor T114a have the same functions as the controller sensor L111a or the controller sensor R112a, detailed description thereof is omitted.
 また、コントローラ93は、観察者Cが用いるコントローラであり、コントローラ94は、観察者Dが用いるコントローラである。ここで、コントローラ93及びコントローラ94は、コントローラ80またはコントローラ90と同様の機能を有するため、詳細な説明は省略する。 The controller 93 is a controller used by the observer C, and the controller 94 is a controller used by the observer D. Here, since the controller 93 and the controller 94 have the same functions as the controller 80 or the controller 90, detailed description thereof is omitted.
 また、映像処理装置10dは、上記実施の形態3の変形例1における映像処理装置10cの光学レンズ12と同じ光学レンズを備えている。つまり、光学レンズ12は、表示画面600の外周縁を形成する4辺のそれぞれの方向から複数の映像のうちの1つの映像が観察されるように映像を屈折させるレンチキュラシートである。 Further, the video processing device 10d includes the same optical lens as the optical lens 12 of the video processing device 10c in the first modification of the third embodiment. In other words, the optical lens 12 is a lenticular sheet that refracts an image so that one image of the plurality of images is observed from each of the four sides forming the outer peripheral edge of the display screen 600.
As described above, according to the video processing device 10d of Modification 2 of Embodiment 3 of the present invention, the display screen 600 is rectangular, and the optical lens 12 refracts the videos so that one of the plurality of videos is observed from the direction of each of the four sides of the display screen 600. An observer can therefore observe the video in its proper orientation from whichever of the four sides the observation is made.
(Embodiment 4)
Next, Embodiment 4 of the present invention will be described. In Embodiments 1 to 3, the video processing device determines the side of the display screen closest to the observer's observation position using sensors such as 3D glass sensors or controller sensors. In Embodiment 4, the video processing device determines the side of the display screen closest to the observation position by capturing an image of the observer with a built-in camera.
The video processing device 10e of Embodiment 4 has the same configuration as the video processing device 10 of Embodiment 1 except for the stereoscopic video reproduction unit 100, so the stereoscopic video reproduction unit 100b of the video processing device 10e of Embodiment 4 is described in detail below.
 図28は、本発明の実施の形態4に係る立体映像再生部100bの機能構成を示すブロック図である。 FIG. 28 is a block diagram showing a functional configuration of the stereoscopic video reproduction unit 100b according to Embodiment 4 of the present invention.
As shown in the figure, the stereoscopic video reproduction unit 100b includes a determination unit 110, a video control unit 120, a video rotation unit 130, a video output unit 140, a camera 160, and a person recognition unit 170. The determination unit 110, the video control unit 120, the video rotation unit 130, and the video output unit 140 have the same functions as those of the stereoscopic video reproduction unit 100 of Embodiment 1, so their detailed description is omitted or simplified.
 カメラ160は、画像を撮像する内蔵カメラである。 The camera 160 is a built-in camera that captures an image.
 人物認識部170は、カメラ160が撮像した画像内に写った人物を認識して観察者として検出する。 The person recognition unit 170 recognizes a person shown in the image captured by the camera 160 and detects it as an observer.
 また、判別部110は、人物認識部170が検出した観察者に最も近い表示画面600の辺を、観察者の観察位置に最も近い表示画面600の辺として判別する。 Further, the determination unit 110 determines the side of the display screen 600 closest to the observer detected by the person recognition unit 170 as the side of the display screen 600 closest to the observer's observation position.
 また、映像回転部130は、人物認識部170が検出した観察者用の映像として、判別部110が判別した辺が底辺となるように、入力映像を回転させた回転映像を生成する。 In addition, the image rotation unit 130 generates a rotation image obtained by rotating the input image so that the side determined by the determination unit 110 becomes the bottom as the image for the observer detected by the person recognition unit 170.
 次に、立体映像再生部100bが表示画面600の辺を判別する処理(図5のS102)の詳細について、説明する。 Next, details of the process (S102 in FIG. 5) in which the stereoscopic video reproduction unit 100b determines the side of the display screen 600 will be described.
 図29は、本発明の実施の形態4に係る立体映像再生部100bが表示画面600の辺を判別する処理を説明するためのフローチャートである。 FIG. 29 is a flowchart for explaining processing in which the stereoscopic video reproduction unit 100b according to Embodiment 4 of the present invention discriminates the sides of the display screen 600.
 図30及び図31は、本発明の実施の形態4に係る立体映像再生部100bが表示画面600の辺を判別する処理を説明するための図である。 30 and 31 are diagrams for explaining processing in which the stereoscopic video reproduction unit 100b according to Embodiment 4 of the present invention determines the sides of the display screen 600.
 まず、図29に示すように、カメラ160は、画像を撮像する(S502)。具体的には、図30に示すように、例えば映像処理装置10eの辺v2-v3の近傍にカメラ160が配置されており、カメラ160が、観察者A及び観察者Bを含む画像を撮像する。ここで、観察者Aは、辺v1-v2の側から観察しており、観察者Bは、辺v3-v4の側から観察していることとする。カメラ160が撮像した画像を図31に示す。 First, as shown in FIG. 29, the camera 160 captures an image (S502). Specifically, as shown in FIG. 30, for example, a camera 160 is disposed in the vicinity of the side v2-v3 of the video processing apparatus 10e, and the camera 160 captures an image including the observer A and the observer B. . Here, it is assumed that the observer A is observing from the side v1-v2 and the observer B is observing from the side v3-v4. An image taken by the camera 160 is shown in FIG.
 図29に戻り、次に、人物認識部170は、カメラ160が撮像した画像内に写った人物を認識して観察者として検出する(S504)。具体的には、図31に示すように、人物認識部170は、カメラ160が撮像した画像内に2人の人物(観測者)が写っていることを認識する。 Returning to FIG. 29, the person recognition unit 170 recognizes a person in the image captured by the camera 160 and detects it as an observer (S504). Specifically, as shown in FIG. 31, the person recognition unit 170 recognizes that two persons (observers) are shown in an image captured by the camera 160.
 図29に戻り、人物認識部170は、隣接辺v3-v4領域内の観察者を識別する(S506)。具体的には、図31に示すように、人物認識部170は、認識した観測者が、カメラ160が撮像した画像の右側の領域である隣接辺v3-v4領域に写っていることを認識する。同図では、人物認識部170は、隣接辺v3-v4領域に写っている観測者Bを認識する。 29, the person recognizing unit 170 identifies the observer in the adjacent side v3-v4 region (S506). Specifically, as shown in FIG. 31, the person recognition unit 170 recognizes that the recognized observer is reflected in the adjacent side v3-v4 region, which is the right region of the image captured by the camera 160. . In the same figure, the person recognizing unit 170 recognizes the observer B shown in the adjacent side v3-v4 region.
 図29に戻り、人物認識部170は、隣接辺v1-v2領域内の観察者を識別する(S508)。具体的には、図31に示すように、人物認識部170は、認識した観測者が、カメラ160が撮像した画像の左側の領域である隣接辺v1-v2領域に写っていることを認識する。同図では、人物認識部170は、隣接辺v1-v2領域に写っている観測者Aを認識する。 29, the person recognizing unit 170 identifies the observer in the adjacent side v1-v2 region (S508). Specifically, as shown in FIG. 31, the person recognition unit 170 recognizes that the recognized observer is reflected in the adjacent side v1-v2 region that is the left region of the image captured by the camera 160. . In the figure, the person recognizing unit 170 recognizes the observer A reflected in the adjacent side v1-v2 region.
Then, the determination unit 110 determines the side of the display screen 600 closest to each observer detected by the person recognition unit 170 as the side of the display screen 600 closest to that observer's observation position (S510). That is, the determination unit 110 determines the side v1-v2 as the side of the display screen 600 closest to observer A's observation position, and the side v3-v4 as the side of the display screen 600 closest to observer B's observation position.
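A minimal sketch of this classification, assuming the camera is mounted along side v2-v3 and that only the horizontal position of the detected face in the captured image is used; the function and parameter names are illustrative.

```python
def nearest_side(face_center_x: float, image_width: int) -> str:
    """Map a detected observer to the screen side nearest to them:
    the left half of the camera image corresponds to side v1-v2 and
    the right half to side v3-v4."""
    return "v1-v2" if face_center_x < image_width / 2 else "v3-v4"
```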
The video rotation unit 130 then generates, as the video for each observer detected by the person recognition unit 170, a rotated video obtained by rotating the input video so that the side determined by the determination unit 110 becomes the bottom side, and the video output unit 140 outputs the rotated video generated by the video rotation unit 130.
As described above, according to the video processing device 10e of Embodiment 4 of the present invention, a person appearing in the image captured by the camera 160 is recognized and detected as an observer, the side of the display screen 600 closest to the detected observer is determined as the side closest to that observer's observation position, and the input video is rotated accordingly. That is, by capturing the observers with the camera 160, the video is displayed on the display screen 600 so that the side of the display screen 600 closest to each observer becomes the bottom side, and the video can therefore be observed in its proper orientation regardless of the direction from which it is observed.
(Modification of Embodiment 4)
Next, a modification of Embodiment 4 of the present invention will be described. In Embodiment 4, the observers observe from each of two opposing sides of the four sides of the display screen 600; in this modification, observers observe from each of the four sides of the display screen 600.
 ここで、本実施の形態4の変形例における映像処理装置は、上記実施の形態4における映像処理装置10eと同様の構成を有するため、構成の説明は省略する。 Here, since the video processing device in the modification of the fourth embodiment has the same configuration as the video processing device 10e in the fourth embodiment, description of the configuration is omitted.
 図32は、本発明の実施の形態4の変形例に係る立体映像再生部100bが表示画面600の辺を判別する処理を説明するためのフローチャートである。 FIG. 32 is a flowchart for explaining a process in which the stereoscopic video reproduction unit 100b according to the modification of the fourth embodiment of the present invention discriminates the sides of the display screen 600.
 図33及び図34は、本発明の実施の形態4の変形例に係る立体映像再生部100bが表示画面600の辺を判別する処理を説明するための図である。 33 and 34 are diagrams for explaining processing in which the stereoscopic video reproduction unit 100b according to the modification of the fourth embodiment of the present invention discriminates the sides of the display screen 600.
 まず、図32に示すように、カメラ160は、画像を撮像する(S602)。具体的には、図33に示すように、例えば映像処理装置の辺v2-v3の近傍にカメラ160が配置されており、カメラ160が、観察者A、観察者B、観察者C及び観察者Dを含む画像を撮像する。ここで、観察者Aは、辺v1-v2の側から観察しており、観察者Bは、辺v3-v4の側から観察しており、観察者Cは、辺v1-v4の側から観察しており、観察者Dは、辺v2-v3の側から観察していることとする。カメラ160が撮像した画像を図34に示す。 First, as shown in FIG. 32, the camera 160 captures an image (S602). Specifically, as shown in FIG. 33, for example, a camera 160 is disposed in the vicinity of the side v2-v3 of the video processing apparatus, and the camera 160 includes an observer A, an observer B, an observer C, and an observer. An image including D is captured. Here, the observer A observes from the side v1-v2, the observer B observes from the side v3-v4 side, and the observer C observes from the side v1-v4 side. The observer D is observing from the side v2-v3. An image captured by the camera 160 is shown in FIG.
Returning to FIG. 32, the person recognition unit 170 then recognizes the persons appearing in the image captured by the camera 160 and detects them as observers (S604). Specifically, as shown in FIG. 34, the person recognition unit 170 recognizes that four persons (observers) appear in the image captured by the camera 160.
Returning to FIG. 32, the person recognition unit 170 identifies any observer within the region adjacent to side v2-v3 (S606). Specifically, as shown in FIG. 34, the person recognition unit 170 recognizes that a detected observer appears in the region adjacent to side v2-v3, which is the lower region of the image captured by the camera 160; in the figure, this is observer D.
Returning to FIG. 32, the person recognition unit 170 identifies any observer within the region adjacent to side v3-v4 (S608). Specifically, as shown in FIG. 34, the person recognition unit 170 recognizes that a detected observer appears in the region adjacent to side v3-v4, which is the right region of the image captured by the camera 160; in the figure, this is observer B.
Returning to FIG. 32, the person recognition unit 170 identifies any observer within the region adjacent to side v1-v2 (S610). Specifically, as shown in FIG. 34, the person recognition unit 170 recognizes that a detected observer appears in the region adjacent to side v1-v2, which is the left region of the image captured by the camera 160; in the figure, this is observer A.
Returning to FIG. 32, the person recognition unit 170 identifies any observer within the region adjacent to side v1-v4 (S612). Specifically, as shown in FIG. 34, the person recognition unit 170 recognizes that a detected observer appears in the region adjacent to side v1-v4, which is the upper region of the image captured by the camera 160; in the figure, this is observer C.
Then, the determination unit 110 determines the side of the display screen 600 closest to each observer detected by the person recognition unit 170 as the side of the display screen 600 closest to that observer's observation position (S614). That is, the determination unit 110 determines side v1-v2 as the side of the display screen 600 closest to the observation position of observer A, side v3-v4 as the side closest to the observation position of observer B, side v1-v4 as the side closest to the observation position of observer C, and side v2-v3 as the side closest to the observation position of observer D.
As described above, even when there are four observers, the determination unit 110 can determine the side of the display screen 600 closest to each observer's observation position.
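As a rough illustration of steps S604 to S614 (again, only a sketch and not the disclosed implementation), the following Python fragment assigns a detected observer to the nearest side of the display screen by splitting the camera image into lower, upper, left, and right regions as in FIG. 34. The coordinate convention and the function name side_for_observer are assumptions made here for explanation.

    def side_for_observer(cx: float, cy: float, width: int, height: int) -> str:
        """Assign an observer, detected at center (cx, cy) in the camera image,
        to one of the adjacent-side regions of FIG. 34 and return that side."""
        x, y = cx / width, cy / height  # normalised coordinates, origin at top-left
        # Distance from the observer to each border of the image; the nearest
        # border selects the adjacent-side region.
        distances = {
            "v2-v3": 1.0 - y,  # lower region of the image
            "v1-v4": y,        # upper region of the image
            "v1-v2": x,        # left region of the image
            "v3-v4": 1.0 - x,  # right region of the image
        }
        return min(distances, key=distances.get)

A person recognition step would first yield one center point per detected person; mapping each center through such a function gives, for every observer, the side that the determination unit 110 then treats as the side of the display screen 600 closest to that observer's observation position.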
Note that the step in which the person recognition unit 170 identifies observers within the imaging area in FIG. 29 or FIG. 32 (S504 in FIG. 29, S604 in FIG. 32) may be omitted.
The video processing devices according to the embodiments of the present invention and their modifications have been described above; however, the present invention is not limited to these embodiments.
That is, the embodiments and modifications disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated by the claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.
For example, the video processing device according to the present invention need not include all of the components of the video processing devices in the above embodiments and their modifications. FIG. 35 is a diagram showing the minimum configuration of a video processing device according to the embodiments of the present invention and their modifications. As shown in the figure, the video processing device 10f of the minimum configuration only needs to include the determination unit 110, the video rotation unit 130, and the video output unit 140.
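To make the minimum configuration of FIG. 35 concrete, the following Python sketch wires a side-determination step, a rotation step, and an output step into a single pipeline. The class name MinimalVideoProcessor, the callback parameters, and the side-to-rotation mapping are hypothetical stand-ins for the determination unit 110, the video rotation unit 130, and the video output unit 140, introduced only for explanation.

    from typing import Callable
    import numpy as np

    # Hypothetical mapping of screen sides to counterclockwise quarter-turns,
    # as in the earlier sketch.
    QUARTER_TURNS = {"v2-v3": 0, "v1-v4": 2, "v1-v2": 1, "v3-v4": 3}

    class MinimalVideoProcessor:
        """Minimal pipeline: determine the nearest side, rotate the frame, output it."""

        def __init__(self,
                     determine_side: Callable[[], str],
                     display_sink: Callable[[np.ndarray], None]) -> None:
            self.determine_side = determine_side  # plays the role of determination unit 110
            self.display_sink = display_sink      # plays the role of video output unit 140

        def process(self, frame: np.ndarray) -> None:
            side = self.determine_side()
            rotated = np.rot90(frame, k=QUARTER_TURNS[side])  # video rotation unit 130
            self.display_sink(rotated)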
The present invention can be realized not only as such a video processing device, but also as a video processing method having, as its steps, the processing performed by the characteristic processing units included in the video processing device.
Each processing unit included in the video processing device according to the present invention may be realized as an LSI (Large Scale Integration) circuit, that is, an integrated circuit. For example, it can be realized as an integrated circuit including the components of the video processing device 10f shown in FIG. 35.
Each processing unit included in the integrated circuit may be formed as an individual chip, or some or all of the processing units may be integrated into a single chip. Although the term LSI is used here, the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
The method of circuit integration is not limited to LSI; integration may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
Furthermore, if a circuit integration technology that replaces LSI emerges from advances in semiconductor technology or from other derived technologies, the functional blocks may naturally be integrated using that technology. The application of biotechnology is one such possibility.
The present invention can also be realized as a program that causes a computer to execute the characteristic processing included in the video processing method. Needless to say, such a program can be distributed via a recording medium such as a CD-ROM or via a transmission medium such as the Internet.
The present invention is applicable to video processing devices, such as tablet terminals, that allow a video to be observed in its intended orientation regardless of the direction from which it is observed.
DESCRIPTION OF SYMBOLS
  10, 10a, 10b, 10c, 10d, 10e, 10f Video processing device
  11, 12 Optical lens
  20, 30, 60, 70 Glasses
  21, 31, 61, 71 Transmitter
  22, 32, 62, 72 Receiver
  40 Object
  51 Camera AL
  52 Camera AR
  53 Camera BL
  54 Camera BR
  80, 90, 93, 94 Controller
  81, 91 Transmitter
  82, 92 Operation button
  100, 100a, 100b Stereoscopic video reproduction unit
  110 Determination unit
  111 3D glasses sensor L
  111a Controller sensor L
  112 3D glasses sensor R
  112a Controller sensor R
  113 3D glasses sensor B
  113a Controller sensor B
  114 3D glasses sensor T
  114a Controller sensor T
  120 Video control unit
  130, 130a Video rotation unit
  131 Video input/output unit
  132 Rotation control unit
  133 Memory
  134 Video processing unit
  140 Video output unit
  150 Synchronization signal generation unit
  151 Transmitter L
  152 Transmitter R
  153 Transmitter B
  154 Transmitter T
  160 Camera
  170 Person recognition unit
  200 Acquisition unit
  210 Network interface
  220 Memory card interface
  230 Optical disc drive
  240 Tuner
  300 Selector
  400 Decoding unit
  500 Audio output unit
  600 Display screen
  700 User input unit
  800 Control unit
  910 Non-volatile memory
  920 Volatile memory

Claims (21)

  1.  A video processing device that processes an input video displayed on a display screen so that an observer can view the input video, the video processing device comprising:
     a determination unit configured to determine the side of the display screen closest to an observation position of the observer;
     a video rotation unit configured to generate a rotated video by rotating the input video so that the side determined by the determination unit becomes the bottom side; and
     a video output unit configured to output the generated rotated video.
  2.  The video processing device according to Claim 1, further comprising
     a sensor configured to detect a signal emitted by a device used by the observer,
     wherein, when the sensor detects the signal emitted by the device, the determination unit determines the side of the display screen closest to the sensor as the side of the display screen closest to the observation position, and
     the video rotation unit generates, as a video for the observer using the device, the rotated video by rotating the input video so that the side determined by the determination unit becomes the bottom side.
  3.  The video processing device according to Claim 1 or 2,
     wherein the video rotation unit receives, as the input video, a left-eye video and a right-eye video for viewing a stereoscopic video, and generates, as the rotated video, a rotated left-eye video and a rotated right-eye video obtained by rotating the left-eye video and the right-eye video, and
     the video output unit outputs the generated rotated left-eye video and rotated right-eye video as the left-eye video and the right-eye video for the observer.
  4.  The video processing device according to Claim 3,
     wherein the video output unit outputs the rotated left-eye video and the rotated right-eye video alternately at regular time intervals so that they are displayed alternately on the display screen at the regular time intervals.
  5.  The video processing device according to Claim 3 or 4,
     wherein the determination unit determines the side of the display screen closest to the position of glasses for viewing the stereoscopic video worn by the observer as the side of the display screen closest to the observation position.
  6.  The video processing device according to Claim 5,
     wherein the glasses are shutter glasses that open and close a left-eye shutter and a right-eye shutter in synchronization with the times at which a left-eye video and a right-eye video are displayed on the display screen,
     the video processing device further comprises a video control unit configured to control the opening and closing of the left-eye shutter and the right-eye shutter of each pair of shutter glasses worn by a plurality of observers, and
     the video control unit:
     when the left-eye video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, opens the left-eye shutter of the shutter glasses worn by the observer at that observation position, closes the right-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by any observer other than the observer at that observation position; and
     when the right-eye video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, opens the right-eye shutter of the shutter glasses worn by the observer at that observation position, closes the left-eye shutter of those shutter glasses, and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by any observer other than the observer at that observation position.
  7.  The video processing device according to Claim 1,
     wherein the determination unit determines the side of the display screen closest to the position of glasses worn by the observer as the side of the display screen closest to the observation position,
     the glasses are shutter glasses that open and close a left-eye shutter and a right-eye shutter,
     the video processing device further comprises a video control unit configured to control the opening and closing of the left-eye shutter and the right-eye shutter of each pair of shutter glasses worn by a plurality of observers, and
     when a video for the observer at the observation position corresponding to the side determined by the determination unit is displayed on the display screen, the video control unit opens the left-eye shutter and the right-eye shutter of the shutter glasses worn by the observer at that observation position and closes the left-eye shutter and the right-eye shutter of the shutter glasses worn by any observer other than the observer at that observation position.
  8.  The video processing device according to any one of Claims 1 to 7, further comprising
     the display screen, the display screen being rectangular.
  9.  The video processing device according to any one of Claims 5 to 7, further comprising:
     the display screen, the display screen being rectangular; and
     sensors configured to detect the glasses, the sensors being provided at positions corresponding respectively to any two of the four sides forming the outer periphery of the display screen,
     wherein, when one of the sensors detects the glasses, the determination unit determines the side of the display screen closest to the sensor that detected the glasses as the side of the display screen closest to the observation position, and
     the video rotation unit generates, as a video for the observer wearing the detected glasses, the rotated video by rotating the input video so that the side determined by the determination unit becomes the bottom side.
  10.  The video processing device according to Claim 9,
     wherein the two sides are two opposite sides among the four sides forming the outer periphery of the display screen, and
     the video rotation unit generates the rotated video by rotating, by 180 degrees, the input video for the observer at one of the two observation positions corresponding to the two sides.
  11.  The video processing device according to any one of Claims 5 to 7, further comprising:
     the display screen, the display screen being rectangular; and
     sensors configured to detect the glasses, the sensors being provided at positions corresponding respectively to any three of the four sides forming the outer periphery of the display screen,
     wherein, when one of the sensors detects the glasses, the determination unit determines the side of the display screen closest to the sensor that detected the glasses as the side of the display screen closest to the observation position, and
     the video rotation unit generates the rotated video by rotating the input video for the observer at the observation position corresponding to a first side of the three sides by 180 degrees when the side determined by the determination unit is the first side, and by rotating the input video for the observer at the observation position corresponding to a second side of the three sides by 90 degrees when the side determined by the determination unit is the second side.
  12.  The video processing device according to Claim 11,
     wherein the video rotation unit includes a video processing unit configured to generate the rotated video by adjusting the size of the input video rotated by 90 degrees to the size of the display screen.
  13.  The video processing device according to Claim 12,
     wherein the video processing unit adjusts the size of the input video rotated by 90 degrees to the size of the display screen by letterboxing, that is, by reducing the video so that the entire video is displayed on the display screen.
  14.  The video processing device according to any one of Claims 5 to 7, further comprising:
     the display screen, the display screen being rectangular; and
     sensors configured to detect the glasses, the sensors being provided at positions corresponding respectively to the four sides forming the outer periphery of the display screen,
     wherein, when one of the sensors detects the glasses, the determination unit determines the side of the display screen closest to the sensor that detected the glasses as the side of the display screen closest to the observation position, and
     the video rotation unit generates the rotated video by rotating the input video for the observer at the observation position corresponding to a first side of the four sides by 180 degrees when the side determined by the determination unit is the first side, by rotating the input video for the observer at the observation position corresponding to a second side of the four sides by 90 degrees to the left when the side determined by the determination unit is the second side, and by rotating the input video for the observer at the observation position corresponding to a third side of the four sides by 90 degrees to the right when the side determined by the determination unit is the third side.
  15.  The video processing device according to Claim 1, further comprising:
     the display screen; and
     an optical lens disposed on the observation-side upper surface of the display screen,
     wherein the video output unit outputs a composite video obtained by dividing a plurality of videos, including the rotated video generated by the video rotation unit, into a plurality of regions and combining them, and
     the optical lens refracts the composite video, displayed divided into the plurality of regions of the display screen, so that, from the direction of one side of the display screen, one of the plurality of videos before the division is observed.
  16.  The video processing device according to Claim 15,
     wherein the display screen is rectangular, and
     the optical lens refracts the videos so that one of the plurality of videos is observed from the direction of each of two opposite sides among the four sides forming the outer periphery of the display screen.
  17.  The video processing device according to Claim 15,
     wherein the display screen is rectangular, and
     the optical lens refracts the videos so that one of the plurality of videos is observed from the direction of each of the four sides forming the outer periphery of the display screen.
  18.  The video processing device according to any one of Claims 15 to 17, further comprising
     a sensor configured to detect a signal emitted by a controller used by the observer to control the video processing device,
     wherein, when the sensor detects the signal emitted by the controller, the determination unit determines the side of the display screen closest to the sensor as the side of the display screen closest to the observation position, and
     the video rotation unit generates, as a video for the observer using the controller, the rotated video by rotating the input video so that the side determined by the determination unit becomes the bottom side.
  19.  The video processing device according to Claim 1, further comprising:
     a camera configured to capture an image; and
     a person recognition unit configured to recognize a person appearing in the image captured by the camera and detect the person as an observer,
     wherein the determination unit determines the side of the display screen closest to the observer detected by the person recognition unit as the side of the display screen closest to the observation position, and
     the video rotation unit generates, as a video for the observer detected by the person recognition unit, the rotated video by rotating the input video so that the side determined by the determination unit becomes the bottom side.
  20.  The video processing device according to any one of Claims 1 to 19, further comprising:
     an acquisition unit that has at least one of a network interface, a memory card interface, an optical disc drive, and a tuner, and that is configured to acquire a video signal;
     a decoding unit configured to decode the acquired video signal; and
     a display screen configured to display the video decoded by the decoding unit and output by the video output unit.
  21.  A video processing method for processing an input video displayed on a display screen so that an observer can view the input video, the video processing method comprising:
     a determination step of determining the side of the display screen closest to an observation position of the observer;
     a video rotation step of generating a rotated video by rotating the input video so that the side determined in the determination step becomes the bottom side; and
     a video output step of outputting the generated rotated video.
PCT/JP2012/002714 2011-06-27 2012-04-19 Video processing device and video processing method WO2013001697A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011142205 2011-06-27
JP2011-142205 2011-06-27

Publications (1)

Publication Number Publication Date
WO2013001697A1 true WO2013001697A1 (en) 2013-01-03

Family

ID=47423630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/002714 WO2013001697A1 (en) 2011-06-27 2012-04-19 Video processing device and video processing method

Country Status (1)

Country Link
WO (1) WO2013001697A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07168529A (en) * 1993-12-14 1995-07-04 Hitachi Ltd Portable information processor
JPH10150608A (en) * 1996-11-19 1998-06-02 Sanyo Electric Co Ltd User-side terminal of digital broadcasting system
JPH11331876A (en) * 1998-05-11 1999-11-30 Ricoh Co Ltd Multi-image display device
JPH11341518A (en) * 1998-05-26 1999-12-10 Nippon Telegr & Teleph Corp <Ntt> Multi-viewpoint simultaneous observation type horizontal layout stereoscopic image display system
JP2006309178A (en) * 2005-03-28 2006-11-09 Toshiba Corp Image display apparatus
JP2007025862A (en) * 2005-07-13 2007-02-01 Sony Computer Entertainment Inc Image processor
JP2007212664A (en) * 2006-02-08 2007-08-23 Funai Electric Co Ltd Liquid crystal display device
JP2011003992A (en) * 2009-06-16 2011-01-06 Canon Inc 3d video display device and control method for the 3d video display device
JP2011049630A (en) * 2009-08-25 2011-03-10 Canon Inc 3d image processing apparatus and control method thereof

Similar Documents

Publication Publication Date Title
US8885020B2 (en) Video reproduction apparatus and video reproduction method
US10009603B2 (en) Method and system for adaptive viewport for a mobile device based on viewing angle
CN103270759B (en) For zero disparity plane of the 3 D video based on feedback
US20120236133A1 (en) Producing enhanced images from anaglyph images
US8477181B2 (en) Video processing apparatus and video processing method
KR20120016408A (en) Method for processing image of display system outputting 3 dimensional contents and display system enabling of the method
JP5343156B1 (en) DETECTING DEVICE, DETECTING METHOD, AND VIDEO DISPLAY DEVICE
CA2933704A1 (en) Systems and methods for producing panoramic and stereoscopic videos
JP5129376B1 (en) Video processing apparatus and video processing method
JP5134714B1 (en) Video processing device
US9060162B2 (en) Providing multiple viewer preferences on a display device
KR20130106001A (en) Apparatus for processing a three-dimensional image and method for adjusting location of sweet spot for viewing multi-view image
EP2424259A2 (en) Stereoscopic video display system with 2D/3D shutter glasses
JP5132804B1 (en) Video processing apparatus and video processing method
JP4806082B2 (en) Electronic apparatus and image output method
WO2013001697A1 (en) Video processing device and video processing method
JP5433763B2 (en) Video processing apparatus and video processing method
JP2013009127A (en) Image display unit and image display method
KR101728724B1 (en) Method for displaying image and image display device thereof
CN102378016A (en) Method for playing corresponding stereoscopic images according to different viewing angles and system for processing stereoscopic images
CN107347165A (en) Multi-channel video display methods, device and electronic equipment
US20240121373A1 (en) Image display method and 3d display system
US8964005B2 (en) Apparatus and method for displaying obliquely positioned thumbnails on a 3D image display apparatus
JP2000059820A (en) Video camera for stereoscopic photographing by three- camera system
TW202416709A (en) Image display method and 3d display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12803625

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12803625

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP