WO2018083737A1 - Display device and remote operation controller

Display device and remote operation controller

Info

Publication number
WO2018083737A1
Authority
WO
WIPO (PCT)
Prior art keywords
display device
point
user
screen
finger
Application number
PCT/JP2016/082467
Other languages
French (fr)
Japanese (ja)
Inventor
吉澤 和彦
橋本 康宣
清水 宏
具徳 野村
岡本 周幸
Original Assignee
Maxell, Ltd. (マクセル株式会社)
Application filed by Maxell, Ltd. (マクセル株式会社)
Priority to PCT/JP2016/082467
Publication of WO2018083737A1

Classifications

    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T7/20 Analysis of motion (image analysis)

Definitions

  • The present invention relates to display device technology, and in particular to technology for controlling user operations on a display device.
  • Electronic devices such as display devices including television receivers and projectors have an operation input panel, a remote control device, a graphical user interface (GUI) for a screen, and the like as man-machine interfaces.
  • Various operations can be instructed to the electronic apparatus by user input through these operation input means.
  • As operation input means, technologies have been developed that enable more intuitive and smooth operation than a conventional remote control device.
  • Examples include techniques for operating an electronic device with a voice recognition function.
  • Nevertheless, operation using a remote control device is still common, because it is easy to use, operates stably, and can be manufactured with inexpensive parts.
  • A technique has also been proposed in which widely available cameras are applied as a remote operation means for the GUI of a display device screen.
  • a system that detects a gesture caused by a predetermined movement of a user using a camera provided in the vicinity of the display device, and controls the operation of the display device using the gesture as an operation instruction by remote operation.
  • a user at a position away from the screen performs a predetermined gesture such as shaking his / her hand from side to side.
  • the system detects the gesture based on the analysis of the camera-captured image, and gives an operation instruction corresponding to the gesture.
  • Japanese Patent Laid-Open No. 8-351541 (Patent Document 1) is cited as a prior art example that realizes remote control of a display device or the like using a camera.
  • Patent Document 1 describes the following gesture recognition system. The system uses a camera to detect the presence of a predetermined hand gesture and its position in space and generates a gesture signal; based on the gesture signal, it displays a hand icon at the on-screen position corresponding to the position in space where the hand gesture was detected. The system generates a machine control signal when the hand icon is moved over a machine control icon in accordance with the movement of the hand gesture.
  • In such a system, a plurality of gestures defined by predetermined movements of the user's fingers are provided, and an operation instruction or the like is associated with each gesture.
  • In order to use remote operation, the user needs to memorize the plurality of gestures, including the correspondence between gestures and operation instructions, which requires a large amount of learning. A certain level of proficiency is also required to prevent erroneous operation.
  • The need for such learning is a considerable disadvantage compared with the ease and convenience of a remote control device.
  • The prior art therefore has problems in terms of user convenience and has room for improvement.
  • An object of the present invention is to provide a technology that can reduce the amount of learning required and improve the usability of the user with respect to the technology for controlling the user's remote operation on the screen of the display device.
  • a representative embodiment of the present invention is a display device and a remote control device, and has the following configuration.
  • A display device according to one embodiment is a display device having a function of controlling a user's remote operation on the screen of the display device, and includes at least two cameras that capture a range including the user who views the screen.
  • Based on analysis of the images captured by the two cameras, the position of a first point representing a user-side reference point and the position of a second point of the user's finger are detected, and a virtual surface space is set within the space connecting the screen and the first point, at a position a predetermined length away from the first point in the viewing direction from the first point toward the screen.
  • The virtual surface space is set so as to overlap the screen as viewed from the user, and the degree to which the finger enters the virtual surface space, including the distance between the position of the virtual surface space and the position of the second point, is calculated.
  • Based on the degree of entry, a predetermined remote operation including a touch operation of the finger on the virtual surface space is determined; operation input information including the position coordinates of the second point, or the position coordinates in the screen associated with the position of the second point, and operation information representing the predetermined remote operation is generated; and the operation of the display device is controlled by the operation input information.
  • the amount of learning necessary for a technique for controlling the user's remote operation on the screen of the display device is small, and the user's usability can be improved.
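As a rough illustration of how the two virtual surfaces could be placed on the axis from the user-side reference point toward the screen, the following is a minimal Python sketch; the vector type, the helper name, and the numeric lengths are assumptions made for this example, not values taken from the description.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def point_on_axis(p0: Vec3, q0: Vec3, length: float) -> Vec3:
    """Return the point at the given length from P0 along the reference axis J0 (P0 -> Q0)."""
    dx, dy, dz = q0.x - p0.x, q0.y - p0.y, q0.z - p0.z
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5
    return Vec3(p0.x + dx / norm * length,
                p0.y + dy / norm * length,
                p0.z + dz / norm * length)

# Illustrative coordinates: screen center Q0 at the origin, user-side reference point P0 in front of it.
p0 = Vec3(0.0, 0.0, 2.0)   # first point (e.g. midpoint between the eyes)
q0 = Vec3(0.0, 0.0, 0.0)   # screen center
L1, L2 = 0.40, 0.45        # assumed lengths to the first and second virtual surfaces, in metres

c1 = point_on_axis(p0, q0, L1)   # center point C1 of the first virtual surface 201
c2 = point_on_axis(p0, q0, L2)   # center point C2 of the second virtual surface 202
print(c1, c2)                    # roughly Vec3(z=1.6) and Vec3(z=1.55)
```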
  • FIG. 1 is a perspective view showing the configuration of a display system including a display device according to a first embodiment of the present invention.
  • A diagram illustrating the functional block configuration of the display system including the display device according to Embodiment 1.
  • In Embodiment 1, a diagram showing the space viewed from the side.
  • In Embodiment 1, an explanatory diagram showing details such as the virtual surface space.
  • In Embodiment 1, a diagram showing the space viewed from above.
  • In Embodiment 1, a diagram showing a case where the viewing position is at an angle with respect to the screen center.
  • In Embodiment 1, a diagram showing the overlap of the screen and the virtual surface in the user's field of view.
  • In Embodiment 1, a diagram showing distance measurement based on binocular parallax.
  • In Embodiment 1, a diagram showing the setting of the user-side reference point based on analysis of camera-captured images.
  • In Embodiment 1, a diagram showing an example of screen display control according to the finger position.
  • A diagram illustrating an example of cursor display control in Embodiment 1.
  • A diagram illustrating a further example of cursor display control in Embodiment 1.
  • A diagram illustrating an example of camera arrangement and orientation in a first modification of Embodiment 1.
  • A diagram showing virtual surface correction in the first modification.
  • A diagram showing a single virtual surface in the space and an example of display control in a second modification of Embodiment 1.
  • A diagram illustrating an example of virtual surface adjustment in a third modification of Embodiment 1.
  • A diagram showing the functional block configuration of a display system including the remote operation control device of Embodiment 2 of the present invention.
  • A diagram illustrating the configuration of a display device according to Embodiment 3.
  • A diagram showing a first processing flow of the display device according to Embodiment 3.
  • A diagram showing a second processing flow of the display device according to Embodiment 3.
  • A diagram showing the configuration of a display system including a display device according to Embodiment 4 of the present invention.
  • In Embodiment 4, a diagram showing the space viewed from the camera.
  • A diagram showing the configuration of a display system including a display device according to Embodiment 5 of the present invention.
  • the display device according to the first embodiment of the present invention will be described with reference to FIGS.
  • the display device of the first embodiment is a display device having a remote operation control function that enables remote operation input to the screen.
  • FIG. 1 shows a configuration of a display system including the display device of the first embodiment.
  • a state in which the user is looking at the screen 10 of the display device 1 is schematically shown in perspective.
  • For explanation, directions and a coordinate system (X, Y, Z) are used, where the X direction is a first direction, the Y direction is a second direction, and the Z direction is a third direction.
  • the X direction and the Y direction are two orthogonal directions constituting the screen 10, where the X direction is the horizontal direction within the screen and the Y direction is the vertical direction within the screen.
  • the Z direction is a direction perpendicular to the X direction and the Y direction of the screen 10.
  • the user is a viewer who views the video on the screen 10 of the display device 1 and is an operator who performs a remote operation on the screen 10.
  • a user-side reference point corresponding to the user's viewpoint is indicated by a point P0.
  • the point P0 is one point of the face or head, for example, the middle point between both eyes.
  • a case where the user is facing the screen 10 as the viewing posture and the viewing direction is along the Z direction is shown.
  • the position of the finger at the time of remote operation by the user is indicated by a point F0.
  • Point F0 is the tip point of the finger that is closest to the screen 10 in the Z direction, for example.
  • the display device 1 is, for example, a television receiver, and has a function of receiving broadcast waves and reproducing and displaying broadcast programs, a function of reproducing and displaying video based on digital input data, and the like.
  • the screen 10 has a rectangular plane, and a central point is a point Q0 and four corner points are points Q1 to Q4.
  • the middle point on the right side is point Q5, the middle point on the left side is point Q6, the middle point on the upper side is point Q7, and the middle point on the lower side is point Q8.
  • the display device 1 includes a camera 21 and a camera 22 which are two cameras.
  • the two cameras are arranged on the left and right positions with respect to the axis connecting the points Q7 and Q8.
  • the display device 1 has a function of performing distance measurement based on binocular parallax related to a target object using captured images of two cameras.
  • the target object is the user's eyes and fingers.
  • the camera 21 is a right camera and is disposed at a point Q5 in the middle of the right side of the screen 10.
  • the camera 22 is a left camera and is disposed at a point Q6 in the middle of the left side of the screen 10.
  • the two cameras are arranged in an orientation for photographing a predetermined range including the user in front of the screen 10, and the orientations can be adjusted.
  • the user is included in the moving image or still image of the video captured by the camera.
  • the camera side reference point in the two cameras is a point Q0 that is an intermediate point between the points Q5 and Q6.
  • the display device 1 detects the distance and position from the target object by distance measurement based on binocular parallax in the analysis processing of the captured images of the two cameras.
  • a known principle can be applied to distance measurement based on binocular parallax. This distance is the distance between the point Q0 that is the camera side reference point and the point P0 that is the user side reference point, or the distance between the point Q0 and the finger point F0. From these distances, the positions of the points P0 and F0 are obtained as the position coordinates of the coordinate system (X, Y, Z) in the three-dimensional space.
  • the display device 1 detects the positions of the point P0 and the point F0 for each time point.
  • the point P0 which is the user side reference point is set as one point of the head or face, for example, the middle point between both eyes.
  • the middle point of both eyes is represented as a point P0 which is a user side reference point.
  • When the user views the screen 10, the space 100 has a quadrangular pyramid shape with the four points Q1 to Q4 of the screen 10 as the base and the point P0, which is the user-side reference point, as the apex.
  • the display device 1 grasps the space 100 formed between the point P0 and the screen 10 based on the detection of the position of the point P0.
  • a straight line connecting the point P0 and the point Q0 is indicated as a reference axis J0.
  • the direction of the reference axis J0 corresponds to the viewing direction when the user views the point Q0 on the screen 10.
  • the display device 1 sets a specific virtual plane in the space 100 based on the grasp of the point P0 and the space 100. This virtual surface is a virtual surface that accepts a user's remote operation on the screen 10.
  • a space between the first virtual surface 201 and the second virtual surface 202 in the space 100 is set as a second space 102 that is a virtual surface space.
  • a virtual surface reference point is provided at a position of a predetermined length from the point P0 on the reference axis J0 in the direction of the point Q0, and the first virtual surface 201 and the second virtual surface 202 are set as two virtual surfaces.
  • the virtual plane space is a substantially flat space having a predetermined thickness.
  • A quadrangular pyramid-shaped space from the point P0 to the first virtual surface 201 is a first space 101, the virtual surface space is a second space 102, and a space from the second virtual surface 202 to the screen 10 is a third space 103.
  • As viewed from the user, the virtual surface overlaps directly with the screen 10.
  • the processor of the display device 1 grasps such a space 100 and the virtual plane space with the position coordinates of the three-dimensional space by calculation.
  • the display device 1 determines and detects a predetermined operation by the finger based on the positional relationship between the finger point F0 and the virtual surface space and the degree of the finger entering the virtual surface space. In the viewing direction, the degree of entry of the finger point F0 with respect to the virtual plane is detected as, for example, the distance between the position of the point F0 and the position of the second virtual plane 202.
  • the predetermined operation is, for example, an operation such as touching, tapping, and swiping on the second virtual surface 202.
  • the display device 1 detects the position of a finger and a predetermined operation with respect to the virtual surface space, and generates operation input information representing the detected position and the predetermined operation.
  • The display device 1 controls the operation of the display device 1 and the GUI of the screen 10 using the operation input information. For example, when a touch operation is performed on a GUI object on the screen 10, for example a selection button, the display device 1 performs a corresponding process according to the touch operation on that object, for example a selection determination process for the option.
  • the user is viewing the video on the screen 10 in a standard posture.
  • The display device 1 detects the user's intention to operate the GUI or the like of the screen 10 as a remote operation from the movement of the user's fingers in this space 100, in particular the movement in the second space 102, which is the virtual surface space.
  • When the fingertip reaches the first virtual surface 201, the display device 1 automatically shifts to a state of accepting remote operation and displays a dedicated cursor on the screen 10.
  • the cursor is a pointer image representing the presence of the user's finger in the virtual plane space, and is information for feeding back the remote operation state to the user.
  • the position on the XY plane in the virtual plane space and the position on the XY plane such as the GUI of the screen 10 are managed in association with each other.
  • the cursor is displayed at a position such as a GUI on the screen 10 associated with the position of the finger in the virtual plane space.
  • FIG. 2 shows a functional block configuration of the display device 1 of the display system of FIG.
  • the display device 1 includes a control unit 11, a storage unit 12, a content display unit 13, a GUI display unit 14, a display drive circuit 15, a screen 10, and a remote operation control unit 20.
  • the remote operation control unit 20 includes a camera 21, a camera 22, a binocular parallax calculation unit 23, a virtual surface operation determination unit 24, an operation input information output unit 25, an individual recognition unit 26, a virtual surface setting unit 27, and a virtual surface adjustment unit 28.
  • the control unit 11 controls the entire display device 1. When a remote operation is performed, the control unit 11 controls the operation of the display device 1 according to the operation input information 301.
  • the storage unit 12 stores control information and content data.
  • the content display unit 13 displays content video on the screen 10 based on the content data.
  • the content can be various types such as a broadcast program, a DVD video, and a material file.
  • The GUI display unit 14 is a part that, in the OS or application of the display device 1, performs processing for controlling GUI image display on the screen 10, prescribed processing corresponding to operations on GUI objects, and the like.
  • the GUI display unit 14 displays a GUI image on the screen 10.
  • the GUI may be a predetermined menu screen or the like.
  • the GUI may be displayed superimposed on the content.
  • Based on the operation on the object indicated by the operation input information 301, the GUI display unit 14 performs a prescribed corresponding process (for example, a selection determination process) in accordance with the operation (for example, a touch operation) on the object (for example, a selection button), and controls the display state (for example, an effect display indicating that the selection button is pressed and selected). Further, the GUI display unit 14 controls the display state of the cursor on the GUI using the cursor display control information of the operation input information 301.
  • the display driving circuit 15 generates a video signal based on the input video data from the content display unit 13 or the GUI display unit 14 and displays the video on the screen 10.
  • the user operates the virtual plane space in FIG.
  • the operation is a remote operation on the screen 10.
  • the operation is detected and determined by the remote operation control unit 20 and converted as an operation input to the GUI of the screen 10.
  • Each of the two cameras captures a range including the user's face and fingers and outputs a captured image.
  • The captured video data is input to the binocular parallax calculation unit 23 and the personal recognition unit 26.
  • The binocular parallax calculation unit 23 analyzes the captured video, extracts features such as the face and fingers, measures the distance to the point P0 and the distance to the point F0 based on the principle of distance measurement by binocular parallax, and obtains the position coordinates of the point P0 and the point F0.
  • The binocular parallax calculation unit 23 outputs the viewpoint position information including the obtained position coordinates of the point P0 to the virtual surface setting unit 27, and outputs the finger position information including the obtained position coordinates of the point F0 to the virtual surface operation determination unit 24.
  • the virtual plane setting unit 27 sets the virtual plane space, that is, the first virtual plane 201 and the second virtual plane 202 in the space 100 using the viewpoint position information of the user at that time.
  • Virtual surface information representing the virtual surface space set by the virtual surface setting unit 27 is input to the virtual surface operation determination unit 24.
  • the virtual surface operation determination unit 24 determines the positional relationship of the fingers with respect to the virtual surface space, the state of the approach, and a predetermined operation based on the virtual surface information and the finger position information.
  • The virtual surface operation determination unit 24 determines the positional relationship and the degree of entry, for example whether or not the finger point F0 has entered beyond the first virtual surface 201, and whether or not it has entered beyond the second virtual surface 202. Further, the virtual surface operation determination unit 24 determines whether or not a predetermined operation such as a touch operation has been performed based on the positional relationship and the degree of entry.
  • the predetermined operation is an operation on the virtual surface.
  • the predetermined operation is interpreted as an operation on the GUI object on the GUI display unit 14.
  • the virtual surface operation determination unit 24 provides the determination result information to the operation input information output unit 25.
  • the operation input information output unit 25 generates operation input information 301 based on the determination result information, and outputs it to the GUI display unit 14 and the like.
  • The operation input information output unit 25 generates the operation input information 301 based on the position coordinates of the finger point F0 in the virtual surface space, the position coordinates in the screen 10 associated with them, and the determination result of the predetermined operation.
  • the operation input information 301 is information including, for example, position coordinate information of the finger point F0 at each time point, operation information indicating a predetermined operation, cursor display control information, and the like.
  • The finger position coordinate information ((Xf, Yf, Zf) in FIG. 3) may be converted, through the association, into position coordinate information in the screen 10 ((xf, yf) in FIG. 7).
  • In that case, the operation input information 301 includes the position coordinate information in the screen 10.
  • the correspondence conversion may be performed not by the operation input information output unit 25 but by the GUI display unit 14.
  • the operation input information 301 includes operation information when a predetermined operation is performed.
  • The operation information is information indicating the type and the presence or absence of a predetermined operation such as touch, tap, or swipe.
  • the operation input information 301 includes cursor display control information according to the finger position.
  • the cursor display control information is information for controlling the cursor display state on the screen 10 and includes designation information such as whether or not the cursor is displayed and the size.
  • the operation input information 301 is operation input information for a GUI, but may be an operation instruction command or control information for an OS, an application, content, or the like.
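For illustration only, the contents listed above might be gathered into a structure like the following minimal Python sketch; the field names are assumptions made for this example and do not appear in the description.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class OperationInputInfo:
    """Illustrative container for the operation input information 301."""
    finger_position: Tuple[float, float, float]      # point F0 position coordinates (Xf, Yf, Zf)
    screen_position: Optional[Tuple[float, float]]   # associated in-screen coordinates (xf, yf)
    operation: Optional[str]                          # e.g. "touch", "tap", "swipe", or None
    cursor_visible: bool                              # cursor display control information
    cursor_size_px: float

# Example: a touch operation whose associated in-screen position is near the screen center.
info = OperationInputInfo((0.02, -0.05, 1.55), (960.0, 540.0), "touch", True, 8.0)
print(info)
```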
  • The personal recognition unit 26 performs processing for recognizing the individual user by analyzing the video captured by the cameras. For example, the personal recognition unit 26 compares the features extracted from the face image in the captured video with the features of already registered face images to determine and identify the individual user.
  • the recognition result information of the personal recognition unit 26 is input to the virtual surface setting unit 27.
  • The personal recognition unit 26 is not limited to recognizing an individual from a face image; a method using a user ID entered by the user, various biometric authentication methods, or the like may also be applied.
  • the method for registering the user's personal face image is as follows.
  • the user selects user setting and face image registration from the menu screen of the display device 1.
  • the display device 1 displays a user setting screen and enters a face image registration mode.
  • The display device 1 displays a message to the user indicating that a face image will be registered, and asks the user to face the camera.
  • the display device 1 shoots a face with a camera, and registers and holds the face image, feature information extracted from the face image, and the like as face information of the individual user. Note that such registration may be omitted, and a face image obtained from a captured image during normal use may be used.
  • the virtual surface setting unit 27 sets a virtual surface according to the individual user.
  • the virtual surface setting unit 27 refers to the virtual surface information of the individual user already set by the virtual surface adjustment unit 28.
  • the virtual plane setting unit 27 sets a virtual plane adjusted according to the user's individual for the finger point F0 of the user's individual, and gives the virtual plane information to the virtual plane operation determination unit 24.
  • the virtual surface adjustment unit 28 performs processing for adjusting a virtual surface suitable for each individual user, in other words, calibration based on the user setting operation.
  • The virtual surface for each user is obtained by adjusting the position of the virtual surface on the reference axis J0 from the standard position to a suitable position forward or backward.
  • the virtual surface adjustment unit 28 holds virtual surface information after adjustment set for each individual user.
  • the specific adjustment method is as follows.
  • the user selects user setting and virtual plane adjustment from the menu screen of the display device 1.
  • a user setting screen is displayed and the virtual plane adjustment mode is entered.
  • A message about the adjustment is displayed to the user, and the user places his or her finger at a position that feels suitable.
  • the display device 1 captures the state with a camera and detects the shape and position of a finger from the captured image.
  • the display device 1 adjusts the position of the virtual surface so as to match the position of the finger, and registers and holds the virtual surface information of the individual user.
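A minimal sketch of this adjustment, assuming it simply places the second virtual surface at the depth of the finger measured during calibration and keeps the first virtual surface a fixed thickness closer to the user; the storage format, names, and numbers are illustrative assumptions.

```python
# Illustrative per-user calibration of the virtual surface position (distances in metres).
user_virtual_surfaces = {}   # user id -> (L1, L2): lengths from the point P0 along the reference axis

def calibrate(user_id: str, p0_z: float, finger_z: float, thickness_m: float = 0.05) -> None:
    """Place the second virtual surface at the calibrated finger depth and the
    first virtual surface thickness_m metres closer to the user."""
    l2 = p0_z - finger_z                 # distance from P0 to the finger along the viewing direction
    l1 = max(l2 - thickness_m, 0.0)
    user_virtual_surfaces[user_id] = (l1, l2)

calibrate("user_a", p0_z=2.0, finger_z=1.55)
print(user_virtual_surfaces)             # roughly {'user_a': (0.40, 0.45)}
```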
  • the virtual surface operation determination unit 24 may be integrated with the GUI display unit 14.
  • In that case, the remote operation control unit 20 gives the GUI display unit 14, as the operation input information 301, at least the position coordinate information of the finger point F0 at each time point.
  • the GUI display unit 14 determines a predetermined operation from the operation input information 301.
  • FIG. 3 shows the space 100 of the display system of FIG. 1 as viewed from the side. In this state, the space 100 is indicated by a triangle having the points Q7, Q8, and P0 as vertices.
  • a second space 102 that is a virtual surface space is set in the space 100.
  • the position coordinates of the point P0 that is the user side reference point are indicated by (Xp, Yp, Zp).
  • the position coordinates of the finger point F0 are indicated by (Xf, Yf, Zf).
  • a half length of the width of the screen 10 in the Y direction is indicated by V1.
  • the distance from the point Q0 that is the camera side reference point to the point P0 that is the user side reference point is indicated by a distance D1.
  • the distance from the point Q0 to the finger point F0 is indicated by a distance D2.
  • the virtual plane is set as a plane that is perpendicular to the viewing direction of the user and parallel to the screen 10.
  • the virtual plane may be set as a plane that is not parallel to the screen 10 according to the viewing direction of the user.
  • a virtual plane shown in FIG. 6 described later is a plane that is perpendicular to the viewing direction of the user and is not parallel to the screen 10.
  • the point P0 that is the user-side reference point is not limited to the middle point between both eyes, and can be set, for example, the center point of the head or face.
  • the virtual plane space is set as a space for enabling a predetermined operation as a remote operation and detecting the predetermined operation.
  • the first virtual surface 201 is particularly a virtual surface related to cursor display control.
  • the second virtual surface 202 is a reference virtual surface related to determination of a touch operation or the like.
  • These two virtual planes are set on the reference axis J0 at positions close to the user-side reference point where the user's fingers can reach.
  • the space 100 and the virtual plane do not have an entity as an object, and the display device 1 manages the information as calculation information.
  • The user freely moves his or her finger up, down, left, right, forward, and backward with respect to the virtual surface space, for example moving the finger back and forth in the direction of the reference axis J0 with respect to the position of a GUI object or the like on the screen 10 to perform a touch operation or a tap operation.
  • the binocular parallax calculation unit 23 detects the position and movement of the finger in the virtual plane space.
  • The binocular parallax calculation unit 23 measures the distance D1 between the point Q0 and the face point P0 by the distance measurement process based on binocular parallax in the analysis of the camera-captured images, and likewise measures the distance D2 between the point Q0 and the finger point F0.
  • the binocular parallax calculation unit 23 detects the position of the point P0 as position coordinates (Xp, Yp, Zp) for each time point, and detects the position of the point F0 as position coordinates (Xf, Yf, Zf) for each time point. Also, the binocular parallax calculation unit 23 grasps the position coordinates (xf, yf) in the screen 10 associated with the position of the finger point F0 in the virtual plane space.
  • FIG. 4 also shows a virtual surface space, examples of finger positions, various distances and lengths, and the like, while the space 100 is viewed from the side.
  • FIG. 4 shows a state in which the second virtual surface 202 is touched at the point F0 at the tip of the finger.
  • Taking the position Z0 of the screen 10 as 0 in the Z direction, the position of the point P0 is Zp, the position of the first virtual surface 201 is Z1, the position of the second virtual surface 202 is Z2, and the position of the point F0 is Zf.
  • the finger point F0 enters the space 100 from the outside of the space 100 according to the user's movement. For example, the point F0 first enters the first space 101, and then enters the second space 102 from the first space 101 through the first virtual plane 201. Next, the point F0 enters the third space 103 from the second space 102 through the second virtual plane 202.
  • the approach of a finger includes not only the Z direction but also free movement up and down, left and right, and back and forth, including the X direction and the Y direction.
  • points F1 to F5 are shown as time-series trajectories of finger positions. Initially, the point F0 is outside the space 100.
  • the point moves from a point outside the space 100 to a point F1 in the first space 101.
  • the point F1 moves to a point F2 in the second space 102.
  • the point F2 moves to a point F3 in the third space 103.
  • the point F3 moves to a point F4 in the second space 102.
  • the point F4 moves to a point F5 outside the space 100.
  • the point F0 of the current finger position is at the point F2.
  • the distance from the point Q0 to the point F2 is shown as the distance D2 at that time.
  • a length L1 and a length L2 are shown as predetermined lengths forward from the point P0.
  • the position of the length L1 from the point P0 in the Z direction is indicated by a position Z1, and the corresponding point is indicated by a point C1.
  • the position of the length L2 from the point P0 in the Z direction is indicated by the position Z2, and the corresponding point is indicated by the point C2.
  • the center of the first virtual surface 201 is arranged at the point C1 at the position Z1.
  • the center of the second virtual plane 202 is arranged at the point C2 at the position Z2.
  • the distance DST indicates the distance from the point F0 of the fingertip to the second virtual plane 202 in the direction of the reference axis J0, corresponds to the difference between the position Zf and the position Z2, and has a positive / negative sign. If the point F0 is at a position before the second virtual plane 202, the distance DST is positive.
  • the binocular parallax calculation unit 23 calculates the distance DST.
  • the distance DST is a value representing the positional relationship of the fingers with respect to the virtual plane space and the degree of entry.
  • the thickness M indicates the thickness that is the distance between the first virtual surface 201 and the second virtual surface 202 in the virtual surface space in the direction of the reference axis J0, and is about 5 cm as an example.
  • the thickness M can also be set according to the predetermined lengths L1 and L2.
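As a rough sketch of the quantities above, assuming positions are reduced to Z coordinates as in FIG. 4 (the screen at Z = 0 and the user at larger Z), the following Python snippet classifies a fingertip depth into the three spaces and computes the signed distance DST; the names and numeric values are illustrative.

```python
def classify_space(zf: float, z1: float, z2: float) -> str:
    """Classify a fingertip depth Zf against the virtual surfaces at Z1 and Z2."""
    if zf > z1:
        return "first space 101"    # between the user and the first virtual surface
    if zf > z2:
        return "second space 102"   # inside the virtual surface space (thickness M = Z1 - Z2)
    return "third space 103"        # between the second virtual surface and the screen

def distance_dst(zf: float, z2: float) -> float:
    """Signed distance DST to the second virtual surface; positive while the
    fingertip is still in front of (before) the surface."""
    return zf - z2

z1, z2 = 1.60, 1.55                  # illustrative positions of the two virtual surfaces
for zf in (1.80, 1.57, 1.50):        # depths roughly corresponding to the points F1, F2, F3
    print(classify_space(zf, z1, z2), round(distance_dst(zf, z2), 3))
```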
  • FIG. 5 shows a state in which the space 100 and the like are viewed from the top and the vertical direction, that is, the XZ plane viewed from the Y direction.
  • the point P0 which is the user side reference point, is generally set at one end near the screen 10 on the circumference of the user's head.
  • the half length of the width of the screen 10 in the X direction is indicated by H1.
  • points F11, F12, F13, and F14 are shown as examples of the position of the finger point F0.
  • the locus moves from a point F11 in the first space 101 to a point F12 in the second space 102.
  • the user can freely move his / her finger up and down, left and right, that is, in the X direction and the Y direction in the second space 102 in accordance with the position of the GUI object or the like on the screen 10. For example, it can move from point F12 to point F14.
  • Suppose the user performs an operation of selecting a desired object on the GUI of the screen 10.
  • The user moves his or her finger to a position corresponding to the object.
  • The object is, for example, a selection button, whose area 311 is illustrated in the X direction.
  • An area 312 on the second virtual surface 202 associated with the area 311 of the screen 10 is also shown.
  • the user performs a touch operation to select the object.
  • The user moves a finger to a position over the region 312, for example the point F12 in the virtual surface space.
  • the user moves his / her finger so as to press the region 312.
  • This movement is determined and detected as a touch operation, and is output as operation input information 301.
  • the GUI display unit 14 performs an object selection determination process in the region 311 based on the operation input information 301 of the touch operation.
  • Depending on the movement, it may instead be determined as a tap operation or a double tap operation.
  • The GUI can be similarly controlled using the operation input information 301 of the tap operation.
  • FIG. 6 similarly shows a case where the viewing position of the user has an angle with respect to the center of the screen 10 with the space 100 viewed from above.
  • the position of the user's eyes or the like is not necessarily limited to just facing the screen 10, and may have an angle with respect to the screen 10 in this way.
  • a virtual plane is basically set in the same manner, and remote operation can be realized.
  • the reference axis J0 with respect to the point P0 which is the user side reference point is shifted by an angle ⁇ with respect to the straight line 320 perpendicular to the center point Q0 of the screen 10. If this deviation is within a certain range of angle ⁇ , distance measurement based on binocular parallax can be performed, and remote operation can be similarly realized.
  • a point P0 that is a user-side reference point is detected based on the camera-captured image, and a distance D1 between the point Q0 and the point P0 is measured.
  • a straight line connecting the point P0 and the point Q0 is taken as a reference axis J0.
  • The first virtual surface 201 is set at the point C1 located at a length L1 from the point P0.
  • the second virtual surface 202 is set at a point C2 at a position of length L2 from the point P0.
  • the first virtual surface 201 and the second virtual surface 202 are set as planes that are perpendicular to the reference axis J0 and non-parallel to the screen 10.
  • FIG. 6 shows a case where the viewing direction has a deviation in the left and right directions in the X direction, but a virtual plane can be similarly set when the viewing direction has a deviation in the upper and lower directions in the Y direction.
  • a virtual plane parallel to the screen 10 may be set at the positions of the points C1 and C2.
  • FIG. 7 shows how the screen 10 and the virtual plane overlap in the field of view when the screen 10 is viewed from the user's viewpoint.
  • In the user's field of view, the first virtual surface 201 and the second virtual surface 202, which are the virtual surfaces, correspond exactly one-to-one with the rectangle of the screen 10 in the X direction and the Y direction.
  • The position of the content or GUI object displayed on the screen 10 and the position of the finger on the virtual surface are in one-to-one correspondence.
  • As seen from the user, the finger point F0 appears directly over the corresponding object or the like on the screen 10, and the cursor displayed on the GUI appears at that same position.
  • the position coordinates in the screen 10 associated with the position of the point F0 are indicated by (xf, yf).
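A minimal sketch of this one-to-one association, assuming a simple proportional mapping of the virtual surface rectangle onto the screen's pixel rectangle; the sizes and resolutions are assumptions for illustration.

```python
def to_screen(xf_v: float, yf_v: float,
              surface_w_m: float, surface_h_m: float,
              screen_w_px: int, screen_h_px: int) -> tuple:
    """Map a finger position on the virtual surface (origin at the surface center,
    in metres) to the associated in-screen pixel coordinates (xf, yf),
    with the screen origin at the top-left corner."""
    xf = (xf_v / surface_w_m + 0.5) * screen_w_px
    yf = (0.5 - yf_v / surface_h_m) * screen_h_px   # screen Y grows downward
    return xf, yf

# Example: a finger slightly to the right of and above the center of the virtual surface.
print(to_screen(0.05, 0.03, surface_w_m=0.40, surface_h_m=0.225,
                screen_w_px=1920, screen_h_px=1080))   # -> approximately (1200.0, 396.0)
```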
  • FIG. 8 is an explanatory diagram for supplementing distance measurement based on binocular parallax by two cameras, and shows an XZ plane when the space 100 and the like are viewed from above.
  • the user sees the point Q0 at the center of the screen 10 from the front, and the reference axis J0 is in the Z direction and along the normal line of the screen 10.
  • FIG. 8 schematically shows an example of a photographed image of the right camera 21 and a photographed image of the left camera 22.
  • the subject of the two cameras is a user A.
  • the captured image includes user A's head, face, upper body, and the like.
  • the display system detects the distance and position of the object by applying the principle of distance measurement based on the known binocular parallax.
  • According to the principle of distance measurement based on binocular parallax, when the two cameras are regarded as two eyes, the depth distance to the subject can be measured from the binocular parallax between the captured images.
  • the camera side reference point Q0, the point Q5 of the camera 21, the position coordinates of the point Q6 of the camera 22, the length H1, etc. are fixed and known values.
  • a point P0 that is a user-side reference point is set as an intermediate point between both eyes.
  • the convergence angle ⁇ 1 at the point P0 is an angle according to the distance D1
  • the convergence angle ⁇ 2 at the point F0 is an angle according to the distance D2.
  • The display device 1 measures the distance D1 from the point Q0 to the point P0 and the distance D2 from the point Q0 to the point F0 by calculation of distance measurement based on binocular parallax in the analysis of the captured images of the two cameras.
  • the angles ⁇ 1 and ⁇ 2 are obtained by calculation from the deviation of the left and right captured images (FIG. 9).
  • The directions of the two cameras may be parallel to each other; it is only necessary that the user be within the shooting range.
  • A difference according to the angles θ1 and θ2 occurs in the image content between the image 801 taken by the right camera 21 and the image 802 taken by the left camera 22.
  • the binocular parallax calculation unit 23 counts the position of an object such as an eye in the image content in units of pixels constituting the images 801 and 802 (referred to as pixel positions) as shown in FIG.
  • the binocular parallax calculation unit 23 calculates the difference width of the pixel position of the object in the image content as the difference ⁇ .
  • the distances D1 and D2 can be calculated using the difference ⁇ and the focal lengths (FD1, FD2) of the camera.
  • the difference ⁇ 1 between the pixel position of the right eye point E1 of the right image 801 and the pixel position of the right eye point E1 of the left image 802 is calculated.
  • the difference ⁇ 1 is about 2 cm, for example.
  • the head diameter 811 is about 19 cm, and the head circumference 812 is about 60 cm.
  • The distance D1 is, for example, about 9.5 m.
  • the distance D2 can be similarly measured.
  • the length H1 is 10 m.
  • the number of pixels of the camera is set to 12 million pixels.
  • Assume that the size of the room or venue is 30 m × 40 m. Even in such a case, the cameras can cover it with a resolution of about 1 cm, which satisfies the specifications necessary for practical use.
  • the above-mentioned calculation may be performed in the same way at the time of remote operation control, but the calculation may be simplified using a lookup table or the like.
  • experimental data is taken, an output value is obtained by calculation from the input value, and the correspondence between the input value and the output value is stored in a lookup table.
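A minimal sketch of such a lookup table, assuming it maps a measured pixel disparity to a precomputed distance; the table values are illustrative, not experimental data from the description.

```python
import bisect

# Precomputed (disparity in pixels -> distance in metres) table, sorted by disparity.
# A larger disparity corresponds to a shorter distance; the values are illustrative only.
disparities_px = [10, 20, 40, 80, 160]
distances_m    = [8.0, 4.0, 2.0, 1.0, 0.5]

def lookup_distance(delta_px: float) -> float:
    """Return the distance for a measured disparity using the nearest table entry at or above it."""
    i = min(bisect.bisect_left(disparities_px, delta_px), len(disparities_px) - 1)
    return distances_m[i]

print(lookup_distance(35))   # -> 2.0
```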
  • FIG. 9 shows the setting of the point P0, which is the user side reference point, based on the analysis of the camera photographed image.
  • The right eye point E1 and the left eye point E2 are included in the face portion of the user shown in the images 801 and 802 of the two cameras.
  • a distance and a position may be calculated for an intermediate point between both eyes in the captured image, and the point may be set as the point P0.
  • an intermediate point between these points may be set as the point P0.
  • The binocular parallax calculation unit 23 of the display device 1 obtains, for example, the pixel position (x1, y1) of the right eye point E1 in the image 801 of the right camera 21 and the pixel position (x2, y2) of the right eye point E1 in the image 802 of the left camera 22.
  • A difference δ1 (in other words, a pixel distance) between these pixel positions of the right eye is calculated.
  • the display device 1 calculates the difference ⁇ 2 with respect to the left eye point.
  • the display device 1 calculates the angles ⁇ 1 and ⁇ 2 from the magnitudes of the pixel position differences ⁇ 1 and ⁇ 2.
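As a rough illustration of the calculation, the following sketch computes the pixel disparity of a feature seen by both cameras and converts it to a distance with the standard stereo relation D = f·B/δ (focal length in pixels times baseline divided by disparity); the focal length, baseline, and pixel positions are assumed values, and the relation is the generic stereo formula rather than one quoted from the description.

```python
def distance_from_disparity(x_right_px: float, x_left_px: float,
                            focal_length_px: float, baseline_m: float) -> float:
    """Distance to a feature located at column x_right_px in the right image and
    x_left_px in the left image, using the standard stereo relation D = f * B / delta."""
    delta_px = abs(x_left_px - x_right_px)   # pixel disparity of the feature
    return focal_length_px * baseline_m / delta_px

# Example: the right eye point E1 appears at different pixel columns in the two images.
d1 = distance_from_disparity(x_right_px=600.0, x_left_px=1400.0,
                             focal_length_px=1600.0, baseline_m=1.0)
print(d1)   # -> 2.0 (metres, with these illustrative numbers)
```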
  • FIG. 10 shows a GUI display control example of the screen 10 according to the position of the finger.
  • the upper side of FIG. 10 shows an example of the position of the finger in a state where the space 100 and the like are viewed from the side, as in FIG.
  • the lower side of FIG. 10 shows a screen G1 and the like as a screen example according to the position of the finger.
  • Screen G1 and the subsequent screen examples show displays of the screen 10 as seen from the user.
  • a menu screen is shown, and two selection buttons are provided as GUI objects.
  • the user selects an option by touching the button.
  • the GUI display unit 14 performs a selection determination process as a corresponding process according to the button touch operation.
  • This touch operation is an operation of touching the second virtual surface 202. Specifically, this touch operation corresponds to a movement in which the finger point F0 comes into contact with the second virtual surface 202 from the second space 102 and slightly enters the third space 103.
  • the screen 10 is assumed to be in a state in which content or the like is displayed with the display function on.
  • the user moves the fingertip into the space 100, for example, the first space 101.
  • the display device 1 displays a screen G1, which is a first screen example, when the position of the finger point F0 is the position of the point F1 in the first space 101 from the outside of the space 100, for example.
  • a menu screen is displayed on the screen G1.
  • GUI objects for giving operation instructions to the OS and applications of the display device 1 are arranged.
  • the GUI objects include various known objects such as option buttons, list boxes, and text input forms.
  • the screen G1 has two option buttons 401 and 402 for selecting from “option A” or “option B” as objects.
  • the display device 1 does not display the cursor yet in this state.
  • the user moves the fingertip from the first space 101 to the second space 102 in order to give an operation instruction.
  • the menu screen is not displayed.
  • the display device 1 displays a cursor K1 for remote operation control on the screen G2 in accordance with the position of the finger in the virtual plane space.
  • the cursor K1 is displayed at a position in the screen 10 associated with the position of the finger point F0 in the virtual plane space.
  • the display state of the cursor K1 is controlled according to the movement of fingers in the second space 102. The user freely moves his / her finger up / down / left / right / front / back in the second space 102.
  • When the user wants to select "option B" of the option button 402, the user moves his or her finger so as to move the cursor K1 onto the option button 402, in order to reflect that intention.
  • the display device 1 displays the cursor K1 at a position above the object.
  • the display device 1 controls the cursor K1 to be displayed in a predetermined color, size, or shape according to the distance DST in the direction of the reference axis J0. For example, on the second screen G2, the display device 1 displays the cursor K1 in an arrow shape and yellow, and displays a smaller size as the distance DST decreases.
  • When the cursor K1 has a small size, it becomes easier to see and operate the GUI object.
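A minimal sketch of this cursor display control, assuming the cursor shrinks linearly as the distance DST decreases and changes colour once the second virtual surface is reached; the size range and colours are assumptions for illustration.

```python
def cursor_style(dst_m: float, thickness_m: float = 0.05,
                 max_px: float = 48.0, min_px: float = 8.0) -> dict:
    """Cursor size and colour as a function of the distance DST to the second virtual surface.
    DST runs from about the thickness M (finger at the first virtual surface) down to 0
    (finger touching the second virtual surface); negative values mean the surface was passed."""
    if dst_m <= 0.0:
        return {"visible": True, "size_px": min_px, "color": "red"}   # reached or passed the surface
    ratio = min(dst_m / thickness_m, 1.0)
    return {"visible": True,
            "size_px": min_px + (max_px - min_px) * ratio,
            "color": "yellow"}

print(cursor_style(0.04))    # far from the surface: large yellow cursor
print(cursor_style(0.005))   # close to the surface: small yellow cursor
print(cursor_style(-0.01))   # past the surface: minimum size, red
```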
  • the user moves his / her finger further back so as to press the option button 402.
  • the cursor K1 is not displayed.
  • the menu screen and the cursor K1 are not displayed.
  • When the finger reaches the point F3, the display changes to the state of the screen G3.
  • the display device 1 keeps the cursor K1 displayed in the state of the screen G3 in which the position of the finger is in the third space 103.
  • the size of the cursor K1 may be minimized.
  • the display device 1 determines and detects the movement of the fingertip reaching and passing through the second virtual surface 202 as a touch operation in the remote operation.
  • the display device 1 determines and detects a movement from the point F2 to the point F3 as a touch operation on the option button 402.
  • the display device 1 gives operation input information 301 representing the detected touch operation to the GUI display unit 14.
  • the GUI display unit 14 performs a selection decision process of “option B” as a process corresponding to the option button 402, and updates the display state of the object and the cursor K1 on the screen G3.
  • the selection decision effect display of the option button 402 is performed, and the display is made such that the option button 402 is pushed inward.
  • the color of the cursor K1 changes from yellow to red.
  • Display control in the first space 101 and the second space 102 is the same as described above.
  • the display device 1 hides the cursor K1 when the position of the finger enters the third space 103.
  • the display device 1 switches the cursor K1 to non-display.
  • the display device 1 displays the cursor K1 again when the position of the finger returns to the second space 102.
  • Other screen display control examples may be as follows. Initially, when the position of the finger is outside the space 100, nothing is displayed on the screen 10 due to an off state of the display function of the display device 1 (main power is off). The user moves his / her finger into the space 100, for example, the first space 101. The display device 1 detects the movement of the finger entering the first space 101 based on the analysis of the video captured by the camera. The display device 1 displays the menu screen of the screen G1 for the first time triggered by the detection. The same shall apply thereafter.
  • FIG. 11 shows a display control example such as a GUI cursor on the screen 10.
  • (A1) shows a screen example in a state in which the position of the finger is relatively far from the second virtual surface 202 in the virtual surface space.
  • icons 411 and the like are displayed as GUI objects.
  • the display device 1 displays the arrow-shaped cursor 410 with the size being changed according to the distance DST.
  • the cursor 410 is displayed in a relatively large size. In this state, since the cursor 410 is still large, it is difficult to select an object such as the icon 411.
  • (A2) shows a screen example in a state where the finger approaches the second virtual surface 202 from the state of (A1) and the position of the finger is relatively close to the second virtual surface 202.
  • the cursor 410 is displayed in a relatively small size.
  • the icons 411 and the like are easy to select. The user can sense the degree of approach to the second virtual surface 202 from the size of the cursor 410 and the like.
  • (B1) similarly shows another screen example in a state in which the position of the finger is relatively far from the second virtual surface 202.
  • a double circular cursor 420 is displayed.
  • The inner circle of the double circle has a constant size and corresponds to the position of the finger point F0.
  • the radius of the outer circle of the double circle is changed depending on the distance DST.
  • the cursor 420 is large.
  • (B2) shows an example of a screen in a state in which a finger approaches the second virtual surface 202 from the state of (B1). In the state (B2), the radius of the outer circle is relatively small.
  • the radius of the outer circle may be the same as the radius of the inner circle.
  • (C1) similarly shows another screen example in a state in which the position of the finger is relatively far from the second virtual surface 202.
  • a hand-shaped cursor 430 is displayed.
  • An image of an open palm shape is used as the first type of hand-shaped cursor 430.
  • (C2) shows an example of a screen in a state in which a finger approaches the second virtual surface 202 from the state of (C1).
  • the type is changed together with the size according to the distance DST, and in particular, the second type is an image of the shape of one finger. In this way, the cursor type or the like may be changed according to the distance DST.
  • the cursor may be a transparent display where the background can be seen.
  • FIG. 12 shows display control examples such as a GUI cursor on the screen 10 and various operation examples.
  • The following shows examples of displaying in more detail the state in which the finger is in contact with the second virtual surface 202, when the degree of entry of the finger with respect to the virtual surface is fed back by cursor display control.
  • (A) of FIG. 12 shows a state in which the tip of one finger is in contact with the second virtual surface 202, and a screen example in that state.
  • an arrow-shaped cursor 410 is displayed at a position on the GUI corresponding to the point F0 of the position of the tip of one finger on the virtual plane, as in (A2) of FIG.
  • the cursor 410 may be displayed in a unique state. For example, an effect indicating contact may be displayed.
  • An example of GUI operation on this screen 10 is shown below. Examples of operations include touch operations and tap operations. The user performs, for example, a touch operation on an object such as a GUI button in a state where the cursor 410 overlaps.
  • the user moves one finger to the back on the reference axis J0 to bring the fingertip into contact with the second virtual surface 202.
  • This movement is determined as a touch operation.
  • Corresponding processing is performed by a touch operation on the second virtual surface 202.
  • a tap operation or the like is possible.
  • (B) of FIG. 12 shows a state in which the palm is in contact with the second virtual surface 202, and a screen example in that state.
  • a cursor 440 having a finger cross-sectional shape is displayed at a position on the GUI in correspondence with the position and cross section of the palm on the virtual plane.
  • An example of GUI operation on this screen 10 is shown below.
  • Examples of operations include a swipe operation.
  • the user performs, for example, a swipe operation on the page or object on the screen 10. That is, the user moves the palm back on the reference axis J0 to be in contact with the second virtual surface 202, and moves the palm so as to slide in a desired direction in the X direction or the Y direction. This movement is determined as a swipe operation. Similarly, a flick operation or the like is possible.
  • the GUI display unit 14 performs page scrolling and object movement processing as corresponding processing in accordance with these operations.
  • the finger cross-sectional shape of the cursor 440 may be a schematic display. Further, as an application, the size of the finger cross section may be used for operation control. For example, it is possible to determine that a touch operation has been performed when the amount of change in the area of the finger cross section is equal to or greater than a threshold. For example, when one finger first comes into contact with the second virtual surface 202, the cross section is the small cross section of one finger, and as the hand enters further toward the back, it becomes the large cross section corresponding to the entire hand.
  • FIG. 10 shows a state in which a plurality of fingers are in contact with the second virtual surface 202 and a screen example in that state.
  • a finger cross-sectional cursor 450 is displayed at a position on the GUI in correspondence with the positions and cross sections of a plurality of fingers on the virtual plane. For example, the three fingers are in contact with the second virtual surface 202.
  • An example of GUI operation on this screen 10 is shown below. Examples of operations include a pinch operation. The user performs, for example, a pinch-in or pinch-out operation on the image on the screen 10 or the like.
  • the user makes a state in which two fingers are in contact with the second virtual surface 202, and moves the two fingers so as to open and close in a desired direction in the X direction or the Y direction. This movement is determined as a pinch operation.
  • the GUI display unit 14 performs image enlargement / reduction processing in accordance with these operations.
  • the number of fingers in the finger cross section may be used for operation control. For example, when it can be determined from the finger cross sections that there are two fingers, the movement can be determined as a pinch operation.
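  • As a sketch of how the finger cross section could be used for such operation control, the following hypothetical example decides between a touch and a pinch candidate from the detected finger count and the change in cross-sectional area; the threshold values are assumptions, not values given in the text.

```python
# Sketch of using the finger cross section on the second virtual surface 202
# for operation control. The threshold values are illustrative assumptions.

def classify_cross_section(finger_count: int,
                           area_now_cm2: float,
                           area_prev_cm2: float) -> str:
    """Classify the current contact with the second virtual surface."""
    AREA_DELTA_TOUCH = 10.0   # assumed threshold on area change for a "touch"

    if finger_count >= 2:
        # Two or more finger cross sections allow a pinch operation.
        return "pinch_candidate"
    if area_now_cm2 - area_prev_cm2 >= AREA_DELTA_TOUCH:
        # A sudden increase (e.g. one finger -> whole hand) is treated as touch.
        return "touch"
    return "contact"

if __name__ == "__main__":
    print(classify_cross_section(1, 3.0, 2.5))   # contact
    print(classify_cross_section(1, 18.0, 3.0))  # touch
    print(classify_cross_section(2, 6.0, 6.0))   # pinch_candidate
```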
  • FIG. 13 shows a detailed example of the shape of the space 100 as a supplement to the first embodiment.
  • the space 100 used for the remote operation control may be a substantially quadrangular pyramid-shaped space 100 having the points E1 and E2 of both eyes as two vertices, as shown in FIG. 13.
  • the virtual plane space is set so as to be within the space 100.
  • Both the space 100 in FIG. 1 and the space 100 in FIG. 13 can be used with almost no problem in terms of accuracy.
  • the space 100 is not limited to the above shape, and may be, for example, a pyramid-like shape having a rectangle set near the face or head, with the four corners of that rectangle as vertices.
  • FIG. 14 is an explanatory diagram for explaining details of determination of a predetermined operation, and shows an XZ plane in a state where the space 100 is viewed from above. As examples of the finger position point F0, points f0 to f5 are shown.
  • an operation control surface 221 is provided at a position Za at a predetermined distance 321 behind the position Z2 of the second virtual surface 202 in the Z direction.
  • a surface 222 for operation control is provided at a position Zb at a predetermined distance 322 behind the position Z2 of the second virtual surface 202.
  • the virtual surface operation determination unit 24 may use only the second virtual surface 202 or the second virtual surface 202 and the surface 221 when determining the touch operation.
  • the virtual surface operation determination unit 24 may use the second virtual surface 202 and the surface 222 when determining a touch operation or a tap operation.
  • the surface 221 at the position Za may be considered as a third virtual surface.
  • the surface 222 at the position Zb may be considered as a fourth virtual surface.
  • the virtual surface space may be considered to be composed of a plurality of virtual surface layers.
  • unlike an existing touch panel, the virtual surface space enables an operation of moving a finger back and forth with respect to an invisible virtual surface in the space 100.
  • the display device 1 feeds back the remote operation state to the user by controlling the display of the cursor or the like on the screen 10 in real time according to the movement.
  • a predetermined operation that can be performed as a remote operation on the virtual surface of the GUI of the screen 10 in the display system will be described below.
  • Various predetermined operations peculiar to this system can be controlled based on the positional relationship with the virtual surface and the degree of entry.
  • various operations are determined according to the distance DST between the point F0 of the finger position and the position of each virtual surface in the virtual surface space.
  • as the distance DST, the distance in a state where the finger point F0 is at the point f2 in the second space 102 is shown.
  • This touch operation is generally an operation of bringing a finger into contact with the second virtual surface 202. Specifically, it is an operation of maintaining the finger point F0 at a position within the range from the position Z2 to the surface 221 at the position Za, or at a position within the range from the position Z2 to the surface 222 at the position Zb.
  • when the finger point F0 moves from the point f2 in the second space 102 to the point f3 in contact with the second virtual surface 202, or is within the range up to the surface 221 at the distance 321 in the third space 103, the determination is made as follows.
  • the movement from the point f2 to the point f3 can be associated as a touch operation on the object when the positions in the X direction and the Y direction are positions on the GUI object.
  • as a condition, the negative value of the distance DST is 0 or more and the distance 321 or less. Whether to use this condition can be set according to the type of object.
  • the display device 1 does not associate with the touch operation or the like when the position of the finger in the X direction and the Y direction is a region such as a background region in the GUI, in particular, a region without an object.
  • in that case, the cursor display state is maintained, for example, in the same state as when the second virtual surface 202 is touched.
  • when the finger point F0 enters further back than the second virtual surface 202, for example within the range from the surface 221 to the surface 222, such as when it reaches the point f4, the determination is made as follows.
  • the movement from the point f2 to the point f4 can be associated as a touch operation on the object when the positions in the X direction and the Y direction are positions on the GUI object.
  • the negative value of the distance DST is larger than the distance 321 and smaller than or equal to the distance 322.
  • it can be associated as a touch operation of the object according to the type of the object. For example, in the case of the option button in FIG. 10, the touch operation is determined using any of the above conditions.
  • when the object is, for example, a volume adjustment button, the user's movement of moving the finger back and forth within the range between the position Za and the position Zb can be determined as a touch operation, and volume adjustment according to the distance DST can be associated with it.
  • the display device 1 may count time when determining the touch operation.
  • the display device 1 may determine that the operation is a long press operation when a state where the position of the finger point F0 is within the range of the position Z2 and the position Za continues for a predetermined time, for example.
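  • The determination ranges above can be expressed with a few comparisons on the signed distance DST. The following sketch assumes that DST is positive in front of the second virtual surface 202 and negative behind it, and that the distances 321 and 322 and the long-press time are given in millimetres and seconds; all values are assumptions for illustration.

```python
import time

# Sketch of the touch / long-press determination around the second virtual
# surface 202. DST is signed: positive in front of the surface (toward the
# user), negative behind it. DIST_321 and DIST_322 correspond to the surfaces
# 221 (position Za) and 222 (position Zb). All values are illustrative.

DIST_321 = 30.0      # mm, depth of surface 221 behind the second virtual surface
DIST_322 = 60.0      # mm, depth of surface 222 behind the second virtual surface
LONG_PRESS_SEC = 1.0

def touch_state(dst: float) -> str:
    depth = -dst                      # how far the finger is behind the surface
    if depth < 0:
        return "in_front"             # still in the second space 102
    if depth <= DIST_321:
        return "touch_zone"           # between Z2 and surface 221
    if depth <= DIST_322:
        return "deep_zone"            # between surface 221 and surface 222
    return "beyond_222"               # cursor hidden, remote operation invalidated

class LongPressDetector:
    """Reports a long press when the finger stays in the touch zone."""
    def __init__(self):
        self.entered_at = None

    def update(self, dst: float, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        if touch_state(dst) == "touch_zone":
            if self.entered_at is None:
                self.entered_at = now
            return now - self.entered_at >= LONG_PRESS_SEC
        self.entered_at = None
        return False
```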
  • Cursor control examples are as follows.
  • the surface 222 at the innermost position Zb is treated as the fourth virtual surface.
  • while the finger point F0 is in front of the surface 222, the display device 1 displays the cursor on the screen 10; when the finger goes beyond the surface 222, the display device 1 hides the cursor and invalidates the remote operation.
  • when the finger returns to the front of the surface 222, the cursor is displayed again.
  • the conventional touch panel tap operation can be realized as the following specific tap operation in this display system.
  • This tap operation is generally an operation of pushing a finger so as to strike the second virtual surface 202 and immediately returning it.
  • when the finger point F0 is pushed from the point f2 through the second virtual surface 202 to the point f3 within the range of the distance 321 (or to the point f4 within the range of the distance 322) and is then immediately returned, the movement can be associated as a tap operation.
  • the double tap operation is generally an operation in which the tap operation on the second virtual surface 202 is repeated twice within a short time.
  • the conventional touch panel swipe operation and mouse drag operation can be realized as the following specific swipe operations in this display system.
  • the swipe operation is an operation of sliding the finger point F0 in a desired direction in the X direction or the Y direction while keeping it within the distance 321 (or the distance 322) behind the second virtual surface 202.
  • the user pushes the finger in at the position on the object on the screen 10, to the second virtual surface 202 and beyond, moves it to a desired position in the X direction and the Y direction, and then returns the finger to a position in the second space 102 in front.
  • the display device 1 determines and detects the swipe operation, and the GUI display unit 14 moves the object in accordance with the swipe operation. The same applies to page scrolling and the like.
  • the conventional touch panel flick operation can be realized as follows as a specific flick operation in this display system.
  • this flick operation is an operation in which a finger is pushed to a position behind the second virtual surface 202 and then returned to a position in the second space 102 while quickly moving in a desired direction in the X direction or the Y direction.
  • the GUI display unit 14 determines the amount of movement of the page or the like according to the speed of finger movement and the amount of change during the flick operation, and transitions the page display state.
  • the conventional pinch operation of the touch panel can be realized as follows as a specific pinch operation in this display system.
  • this pinch operation is an operation of opening and closing two finger points in a desired direction in the X direction or the Y direction while keeping the two points within the range of the distance 321 (or the distance 322) behind the second virtual surface 202.
  • the user, for example, opens two fingers to pinch out or closes two fingers to pinch in while the two points are simultaneously within the range of the distance 321 behind the second virtual surface 202, and then returns the fingers to the second space 102.
  • the GUI display unit 14 can associate the pinch-out and pinch-in operations with the enlargement / reduction processing of the image on the screen 10.
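  • One way to read the tap, swipe, and flick definitions above is as a small classifier over the finger trajectory while it is behind the second virtual surface 202: a quick in-and-out with little lateral motion is a tap, sustained lateral motion is a swipe, and fast lateral motion at withdrawal is a flick. The sketch below uses assumed thresholds and a simplified trajectory representation; none of the numbers come from the text.

```python
# Sketch of classifying a tap / swipe / flick from the finger trajectory while
# the finger point F0 is behind the second virtual surface 202.
# Each sample is (t_seconds, x_mm, y_mm); thresholds are assumptions.

TAP_MAX_DURATION = 0.4     # s, push-and-return time for a tap
TAP_MAX_TRAVEL = 15.0      # mm, lateral travel allowed for a tap
FLICK_MIN_SPEED = 400.0    # mm/s, lateral speed near withdrawal for a flick

def classify_penetration(samples):
    """Classify one penetration episode (entry to exit of the back space)."""
    if len(samples) < 2:
        return "tap"
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    duration = t1 - t0
    travel = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

    # Lateral speed over the last two samples approximates the exit speed.
    (ta, xa, ya), (tb, xb, yb) = samples[-2], samples[-1]
    dt = max(tb - ta, 1e-6)
    exit_speed = (((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5) / dt

    if duration <= TAP_MAX_DURATION and travel <= TAP_MAX_TRAVEL:
        return "tap"
    if exit_speed >= FLICK_MIN_SPEED:
        return "flick"
    return "swipe"

if __name__ == "__main__":
    tap = [(0.0, 0, 0), (0.2, 2, 1)]
    swipe = [(0.0, 0, 0), (0.5, 60, 0), (1.0, 120, 0)]
    flick = [(0.0, 0, 0), (0.2, 10, 0), (0.25, 60, 0)]
    for name, s in [("tap", tap), ("swipe", swipe), ("flick", flick)]:
        print(name, "->", classify_penetration(s))
```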
  • FIG. 15 shows the virtual surface adjustment for each user in the first embodiment.
  • the display device 1 has a function of setting a virtual plane space suitable for the individual user as one of the functions. In this function, it is possible to adjust the position of each virtual surface to shift back and forth on the reference axis J0 based on the standard virtual surface space, and the thickness M of the virtual surface space can also be adjusted. With the shift of the position of the virtual surface, the size of the virtual surface is also adjusted to a size that fits in the quadrangular pyramid shape of the space 100. As described above, the individual user can be identified using the individual recognition unit 26.
  • (A) of FIG. 15 first shows the standard position and the like of a standard virtual surface.
  • This virtual surface information is preset as a default value.
  • As standard positions, lengths L1 and L2 having predetermined values and a thickness M are shown.
  • the virtual plane has a standard size on the XY plane and has a size that matches the ratio of the screen 10. If the user is a new user and the virtual surface information is not registered, a standard virtual surface is set using default values.
  • (B) of FIG. 15 shows an example of the state of the virtual surface after adjustment with respect to (A).
  • in this example, the position is adjusted so as to be closer to the user than the position in the Z direction of the standard virtual surface for adults.
  • the size of the virtual plane is also reduced to match the space 100.
  • the virtual plane can be adjusted for each individual user using the virtual plane adjustment unit 28 as the user setting.
  • the user can register a face image or finger image for personal recognition, a user ID, or the like in advance, and can set the position and size of the virtual plane suitable for him.
  • the position Z2 of the second virtual surface 202 in (A) is shifted to the position Z2s so as to approach the point P0 by a distance of 331 on the reference axis J0 in (B).
  • the center point C2 is the center point C2s.
  • the predetermined length L2 is changed to the length L2s.
  • the position Z1 of the first virtual surface 201 is shifted to the position Z1s on the reference axis J0, and the center point C1 of the first virtual surface 201 is the center point C1s.
  • the predetermined length L1 is changed to the length L1s.
  • the thickness M of the virtual surface space is changed to the thickness Ms.
  • a second virtual surface 202s is set at a size that fits in the space 100 perpendicular to the center point C2s.
  • a first virtual surface 201s is set at a size that fits in the space 100 perpendicularly to the center point C1s.
  • the distance 331 is an offset distance for adjustment and can be set by the user.
  • the adjusted set value may be obtained by applying a calculation for multiplying the default value by the adjustment coefficient. Adjustment for shifting the virtual plane to the position on the back on the reference axis J0 is also possible.
  • the display device 1 determines a rough classification such as male / female or adult / child regarding the target user based on the analysis of the camera-captured image.
  • the display device 1 automatically applies a virtual surface having a position and size set for each classification as setting information in advance according to the classification.
  • the position on the reference axis J0 can be selected and adjusted according to the length of the arm of the individual user. For example, in general, adults have longer arms than children, and men have longer arms than women. It is preferable that the virtual plane space is set at a suitable position for each individual user of the family.
  • This display system can set a suitable virtual plane for each individual user using the adjustment function. The distance between the point P0 and the virtual plane is set to an appropriate length. Thereby, the usability for each user is improved. In this display system, for example, even when an adult and a child have the same standard virtual plane space, the usability is slightly different due to the difference in arm length and the like, but no major problem occurs.
  • the standard virtual surface may be set using a statistical value such as an average value of the user group.
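  • The per-user adjustment described above amounts to replacing the default lengths L1 and L2 with shifted values and re-fitting the virtual surface size to the pyramid-shaped space 100. The following is a minimal sketch; the default values, the offset, and the proportional scaling rule are all assumptions for illustration.

```python
# Sketch of per-user virtual surface adjustment. The default lengths L1 and L2
# are measured on the reference axis J0 from the user-side reference point P0;
# the offset (distance 331) pulls the surfaces toward P0. Because the space 100
# narrows toward P0, the surface size is scaled down in proportion. All numbers
# and the scaling rule are simplified assumptions.

from dataclasses import dataclass

@dataclass
class VirtualSurfaceConfig:
    l1: float        # distance P0 -> first virtual surface 201 (mm)
    l2: float        # distance P0 -> second virtual surface 202 (mm)
    width: float     # virtual surface width (mm)
    height: float    # virtual surface height (mm)

DEFAULT = VirtualSurfaceConfig(l1=250.0, l2=300.0, width=320.0, height=180.0)

def adjust_for_user(default: VirtualSurfaceConfig, offset_331: float) -> VirtualSurfaceConfig:
    """Shift the virtual surfaces toward P0 by offset_331 and rescale them."""
    l1s = max(default.l1 - offset_331, 0.0)
    l2s = max(default.l2 - offset_331, 0.0)
    scale = l2s / default.l2 if default.l2 else 1.0   # closer to P0 -> smaller surface
    return VirtualSurfaceConfig(l1=l1s, l2=l2s,
                                width=default.width * scale,
                                height=default.height * scale)

if __name__ == "__main__":
    # Example: settings for a user with shorter arms (e.g. a child).
    print(adjust_for_user(DEFAULT, offset_331=80.0))
```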
  • according to the first embodiment, the user's remote operation on the screen 10 of the display device 1 can be controlled, the required learning amount is small, and the user's usability can be improved.
  • in Embodiment 1, there is no need for the user to distinguish and memorize a plurality of gestures for the use of remote operation; the learning amount is minimal, remote operation is possible by simple finger movement, and it is easy to use.
  • the user can easily perform an operation input to the screen 10 of the television receiver or the like as a remote operation, empty-handed and without using a remote control device.
  • the user only needs to perform an operation according to the conventional mouse or touch panel operation on the virtual surface space, which is convenient.
  • a virtual plane is configured at a suitable position in front of the user's line of sight, and feedback by cursor display is performed according to the position of the finger.
  • the user can intuitively and smoothly perform remote operation on a space where there is no object.
  • in the prior art example, the ease of remote operation in the depth direction between the user's viewpoint and the screen is not sufficiently considered.
  • in the first embodiment, a unique touch operation or the like in the depth direction is realized, and the ease of remote operation and the like is sufficiently considered.
  • the processing load on the computer can be relatively reduced, which is advantageous in implementation.
  • a virtual surface space is set, and a process of detecting a predetermined operation is performed by monitoring the degree of finger entry in the virtual surface space. This processing is less computationally intensive than the processing for distinguishing and detecting a plurality of gestures in the prior art example.
  • two cameras may be integrated as one camera having an advanced function of measuring the distance of an object.
  • the processor of the display device 1 performs processing using data from the camera.
  • FIG. 16 shows the arrangement of two cameras, the space 100, and the like in the display system of the first modification of the first embodiment.
  • FIG. 16 illustrates, as an example of the user's viewing posture, a case where the user is viewing the screen 10 while lying on the floor and operates the virtual surface with the right hand.
  • the user's posture can change, but remote control is possible in each posture.
  • the cameras 21 and 22 are arranged at the position of the upper right point Q1 and the upper left point Q2 with respect to the screen 10.
  • the camera-side reference point is basically the middle point between the two cameras, that is, the middle point Q7 on the upper side of the screen 10.
  • the virtual plane can be set and controlled as in the first embodiment.
  • the virtual surface of the space 100 is basically as shown in FIG. 17A, and is set as shown in FIG. 17B by correction.
  • FIG. 17 shows the correction of the virtual surface in the first modification.
  • FIG. 17A shows a state before correction.
  • the reference axis J0 used for setting the virtual plane is a straight line connecting the point Q7 and the point P0.
  • the first virtual plane 201a is set as a vertical plane with the point C1a at the position of the predetermined length L1 from the point P0 toward the point Q7.
  • the second virtual surface 202a is set at the point C2a at the position of the length L2.
  • the size of the second virtual surface 202a is a size with a predetermined ratio corresponding to the size of the screen 10.
  • the first virtual surface 201a and the second virtual surface 202a before correction may be used as they are for control.
  • when the viewing position of the user is sufficiently far from the screen 10, there is almost no problem in terms of accuracy even if such a virtual surface is used.
  • (B) of FIG. 17 shows the virtual surface after correction from (A).
  • the first virtual surface 201a and the second virtual surface 202a are corrected to obtain the corrected first virtual surface 201b and second virtual surface 202b.
  • Several correction methods are possible; the following is a method using angular rotation.
  • the reference axis J0 and the virtual surface are rotated by an angle so as to be aligned with the center point Q0 of the screen 10.
  • this angle is obtained by calculation using the position of the point P0, the length V1, and the like.
  • the reference axis J0 after rotation is a straight line connecting the point Q0 and the point P0.
  • the rotated virtual surfaces are the first virtual surface 201b and the second virtual surface 202b, and the center points thereof are points C1b and C2b.
  • the other correction methods may be as follows.
  • the position and distance D1 of the point P0 from the center point Q0 of the screen 10 are obtained by calculation using the position and distance D3 of the point P0 from the point Q7 which is the camera side reference point and the length V1.
  • similarly, using the lengths L1 and L2 on the reference axis J0 connecting the point Q0 and the point P0, the corrected first virtual surface 201b and second virtual surface 202b may be set.
  • alternatively, the second virtual surface is first set between the point Q7 and the point P0, and is then corrected to a position on the reference axis J0 between the point Q0 and the point P0. Then, the first virtual surface is set at a position a predetermined thickness in front of the corrected second virtual surface.
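  • As a minimal sketch of this correction, the virtual-surface center points are initially placed on the axis from the camera-side reference point Q7 to the user-side point P0 and are then recomputed on the axis from the screen center Q0 to P0. The coordinates and lengths below are illustrative assumptions.

```python
# Sketch of the virtual-surface correction in the first modification.
# Before correction, the center points C1a, C2a lie on the axis P0 -> Q7
# (camera-side reference point); after correction, C1b, C2b lie on the axis
# P0 -> Q0 (screen center). Coordinates are in millimetres and illustrative.

import numpy as np

def center_points_on_axis(p0, target, l1, l2):
    """Place the virtual-surface center points at distances l1, l2 from P0
    along the axis from P0 toward the given target point."""
    p0, target = np.asarray(p0, float), np.asarray(target, float)
    direction = (target - p0) / np.linalg.norm(target - p0)
    return p0 + l1 * direction, p0 + l2 * direction

if __name__ == "__main__":
    P0 = [0.0, 1000.0, 2000.0]    # user-side reference point
    Q7 = [0.0, 1400.0, 0.0]       # midpoint of the two cameras (top of screen)
    Q0 = [0.0, 1100.0, 0.0]       # center point of the screen 10
    L1, L2 = 250.0, 300.0

    c1a, c2a = center_points_on_axis(P0, Q7, L1, L2)   # before correction
    c1b, c2b = center_points_on_axis(P0, Q0, L1, L2)   # after correction
    print("C2a:", c2a, "-> C2b:", c2b)
```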
  • the arrangement of the two cameras is not limited to the above form, and various arrangements are possible as long as the user can be captured and the distance can be measured.
  • for example, the positions of the lower right point Q3 and the lower left point Q4 may be used. Further, three or more cameras may be arranged to improve the accuracy of distance measurement based on binocular parallax.
  • FIG. 18 shows a second modification.
  • a single virtual surface 200 is set in the space 100.
  • the virtual plane 200 is set to a size that fits in the space 100 around the point C0 at the position Zc of a predetermined length L0 from the point P0.
  • the space 100 is divided into a first space 111 on the side closer to the user and a second space 112 on the back side by a single virtual plane 200. Operations such as finger touching and pushing on the virtual surface 200 are possible.
  • the screens GA and GB show display control examples of the screen 10 according to the finger position.
  • the point F0 of the finger position is in the first space 111, for example, at the point Fa, for example, the screen GA is displayed.
  • the screen GA has a GUI object on the menu screen or the like and displays a cursor K1.
  • the display device 1 changes the size of the cursor K1 and the like according to the distance DST that represents the positional relationship of the fingers with respect to the virtual surface 200.
  • when the finger position moves to, for example, the point Fb, the display changes as shown in the screen GB.
  • a movement from the point Fa to the point Fb or the point Fc is determined and detected as a touch operation.
  • Operation input information 301 representing the touch operation is given to the GUI display unit 14.
  • the GUI display unit 14 performs object correspondence processing in accordance with the touch operation, and updates the display state of the object and the cursor K1. Similarly, a tap operation or the like on the virtual surface 200 can be performed.
  • a touch operation or the like on the entire virtual surface 200 can be performed.
  • the virtual surface 200 is set based on camera shooting, and a touch operation on the virtual surface 200 is accepted.
  • the touch operation is associated as a predetermined operation instruction to the display device 1, for example, power on (switching the display function to an on state). In that case, feedback such as cursor display control may be omitted.
  • the entire XY plane of the virtual surface 200 may be divided into a plurality of regions, for example, into left and right regions, and a touch operation or the like on each region may be associated with a different operation instruction.
  • FIG. 19 shows a state where the screen 10 and the virtual surface overlap in the user's field of view in the third modification.
  • the setting and adjustment of the virtual plane are limited to a part of the entire XY plane of the screen 10.
  • the first virtual plane 201 and the second virtual plane 202 which are virtual planes are set to overlap the right half area 191 in the entire XY plane of the screen 10.
  • in the region 191, remote operation such as a touch operation is accepted as valid, and the cursor K1 is also displayed.
  • in the left half region 192, remote operation is invalid and is not accepted, and the cursor K1 is not displayed.
  • a frame or the like indicating the area 191 may be displayed on the screen 10.
  • FIG. 20 shows a functional block configuration of the display system of the second embodiment.
  • the display system according to the second embodiment is a system in which the display device 1 and the remote control device 3 are connected.
  • the remote operation control device 3, which is an independent device different from the display device 1, has the function of the remote operation control unit 20 of the first embodiment.
  • the remote operation control device 3 controls a user's remote operation on the GUI or the like of the screen 10 of the display device 1.
  • the remote operation control device 3 generates operation input information 301 for remote operation and transmits it to the display device 1.
  • the display device 1 controls the GUI and the like of the screen 10 based on the operation input information 301 as in the first embodiment.
  • the display device 1 includes a communication unit 16 in addition to the control unit 11 and the other components similar to those in FIG. 2.
  • the communication unit 16 receives the operation input information 301 from the communication unit 33 of the remote operation control device 3 and gives it to the GUI display unit 14 and the like.
  • the remote operation control device 3 includes similar components corresponding to the remote operation control unit 20 of FIG. 2 and includes a control unit 31, a storage unit 32, a communication unit 33, and the like.
  • the control unit 31 controls the entire remote operation control device 3.
  • the storage unit 32 stores control information and data.
  • the communication unit 33 performs communication processing with the communication unit 16 of the display device 1.
  • the communication unit 16 and the communication unit 33 are parts including a communication interface device corresponding to a predetermined communication interface.
  • the communication unit 33 transmits the operation input information 301 output from the operation input information output unit 25 to the communication unit 16 of the display device 1.
  • remote control can be realized as in the first embodiment. In the second embodiment, it is not necessary to mount a remote control function on the display device 1, and an existing display device can be used. Various display devices 1 can be connected to the remote operation control device 3 as necessary to constitute a display system.
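  • The operation input information 301 exchanged in the second embodiment can be thought of as a small message carrying the cursor position and the detected operation. The field names and the encoding below are assumptions for illustration; the description does not define a concrete wire format.

```python
# Sketch of an "operation input information 301" message passed from the
# remote operation control device 3 to the display device 1. The field names
# and the JSON encoding are assumptions; the document does not define a format.

import json
from dataclasses import dataclass, asdict

@dataclass
class OperationInput301:
    x: float            # position in the screen 10 (normalized 0..1)
    y: float
    dst: float          # distance of finger point F0 to the second virtual surface
    operation: str      # e.g. "none", "touch", "tap", "swipe", "flick", "pinch"

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def decode(data: bytes) -> "OperationInput301":
        return OperationInput301(**json.loads(data.decode("utf-8")))

if __name__ == "__main__":
    msg = OperationInput301(x=0.42, y=0.67, dst=-12.0, operation="touch")
    wire = msg.encode()                     # sent via communication unit 33
    print(OperationInput301.decode(wire))   # received by communication unit 16
```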
  • A display device and the like according to the third embodiment of the present invention will be described with reference to FIGS. 21 to 24.
  • the basic configuration of the third embodiment is the same as that of the first embodiment.
  • the components of the third embodiment that are different from the first embodiment will be described.
  • the display system according to the third embodiment has a function of saving power by controlling operations such as power on / off of the display device 1 in connection with remote operation control.
  • FIG. 21 is a perspective view showing a configuration of a display system including the display device 1 according to the third embodiment.
  • the display device 1 includes a control board 600, a human sensor 601 and a main power supply unit 602 in a housing, which are connected by a communication line or a power line.
  • FIG. 21 shows a case where the two cameras are built into the housing of the display device 1, with the lens portion of each camera exposed to the outside. The components shown in FIG. 2 are mounted on the control board 600 as electronic circuits.
  • the main power supply unit 602 supplies power to each unit.
  • the human sensor 601 is basically always turned on.
  • the two cameras and the display function portion of the display device 1 are normally in a power-off state, in other words, in a standby state.
  • the human sensor 601 detects the presence of a person in a predetermined range such as around the display device 1 using infrared rays or the like.
  • a detection signal is sent to the control board 600.
  • the display device 1 turns on the power of the two cameras and turns on the display function according to the detection signal.
  • the camera starts shooting, and the display device 1 starts processing using the camera shot image.
  • the display device 1 performs user personal recognition processing based on the analysis of the face image of the captured image.
  • the display device 1 may determine the brightness of the room based on a camera or another sensor.
  • when the display device 1 determines that the room is dark and the brightness is insufficient for the personal recognition process, the display device 1 turns on the display of the screen 10 and controls the screen 10 to display a high-luminance image, for example, a menu screen with a high-luminance background. This supports the personal recognition process.
  • the display device 1 sets a virtual plane in the space 100 based on user personal recognition, and enters a mode in which remote operation is possible.
  • the display device 1 may control the power-on of the display device 1 by a user's remote operation as follows.
  • the user holds his / her finger over the screen 10 at a position near the front of the screen 10.
  • the display device 1 detects the movement of the finger that has entered the space 100 based on the analysis of the camera-captured video.
  • the display device 1 interprets the movement as a power-on operation instruction.
  • power is supplied from the main power supply unit 602 to each unit constituting the display function, and the display function is turned on.
  • the screen 10 displays a content or a GUI menu screen.
  • the power-off state is continued if the user has not entered his / her finger into the space 100.
  • the user can continue actions other than viewing.
  • the main power supply unit 602 completely shifts the display system including the human sensor 601 to the power-off state.
  • the human sensor 601 detects that a person has entered a predetermined range such as around the display device 1 or a room entrance.
  • the camera captures a predetermined range and outputs a captured video signal.
  • the display device 1 detects whether there is a user within a predetermined range based on the analysis of the captured image.
  • the display device 1 determines that the user is highly likely to use the display function, and turns on the display function. That is, power supply from the main power supply unit 602 to each unit of the control board 600 is started.
  • the display device 1 determines a predetermined operation by the user using the remote control function and the camera-captured video.
  • the predetermined operation is a predetermined operation for turning on the display function.
  • this predetermined operation can be arbitrarily defined, it is, for example, an operation of bringing a finger into the virtual surface space of the space 100, a touch operation of the second virtual surface 202, or the like.
  • the display device 1 may detect a predetermined operation when a finger is in the virtual plane space for a predetermined time or longer.
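  • The power-saving behaviour described above can be seen as a small state machine driven by the human sensor and the virtual-surface detection. The sketch below keeps only the state transitions; the state and event names are assumptions for illustration.

```python
# Sketch of the power-control state machine of the third embodiment:
# standby -> cameras on (human detected) -> display on (finger enters the
# virtual surface space) -> standby again when nobody is present.
# State and event names are assumptions for illustration.

TRANSITIONS = {
    ("standby",    "human_detected"):    "cameras_on",
    ("cameras_on", "finger_in_surface"): "display_on",
    ("cameras_on", "no_human_timeout"):  "standby",
    ("display_on", "no_human_timeout"):  "standby",   # auto shut-off
}

def step(state: str, event: str) -> str:
    """Advance the power state machine; unknown events keep the current state."""
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = "standby"
    for event in ("human_detected", "finger_in_surface", "no_human_timeout"):
        state = step(state, event)
        print(event, "->", state)
```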
  • FIG. 22 shows a functional block configuration including hardware such as an internal electronic circuit of the display device 1 according to the third embodiment.
  • the display device 1 includes a first antenna 500, a second antenna 501, a tuner circuit 502, a demodulation circuit 503, a video / audio data signal separation circuit 504, a data expansion circuit 505, a camera signal input circuit 510, an image memory 511, an MPU (microprocessor). Unit) 520, nonvolatile data memory 521, video input circuit 530, graphics circuit 540, liquid crystal driving circuit 550, switch 560, display panel screen 10, cameras 21, 22, human sensor 601, main power supply unit 602, power plug 603 etc.
  • An external PC 700 or the like can be connected to the display device 1.
  • the MPU 520 performs overall control processing of the display device 1.
  • the MPU 520 handles various processes such as personal recognition, face and finger distance measurement, virtual surface setting, operation determination on the virtual surface, GUI display control, and the like as shown in FIG.
  • the non-volatile data memory 521 stores control data and information.
  • the non-volatile data memory 521 stores registered face image data for individual user recognition, virtual surface information for each individual user, and the like.
  • the display device 1 also has a signal line and a power supply line that connect each part.
  • the power supply line 651 is a power supply line for supplying power from the main power supply unit 602 to an electronic circuit such as the MPU 520 in the control board 600 and the human sensor 601.
  • the power supply line 652 is a power supply line for supplying power from the main power supply unit 602 to the cameras 21 and 22. Control signals and the like are transmitted and received through the signal lines 610 to 613 and the like.
  • the display device 1 has a display function similar to that of a general television receiver as a basic function.
  • the first antenna 500 is a terrestrial digital broadcast television antenna.
  • the second antenna 501 is a satellite TV antenna.
  • the display device 1 detects a television signal received by the first antenna 500 and the second antenna 501 by the tuner circuit 502 and demodulates the signal by the demodulation circuit 503.
  • the demodulated signal is separated into video, audio, and data signals by a video / audio data signal separation circuit 504.
  • the data decompression circuit 505 performs the decompression process.
  • a video signal sent from the external PC 700 or the like is converted into an appropriate format by the video input circuit 530 and transmitted to the graphics circuit 540.
  • the camera signal input circuit 510 inputs an image signal of a captured video obtained from the cameras 21 and 22.
  • the image signal of the captured video data is converted into an image signal of a predetermined format that can be easily recognized and stored in the image memory 511 side by side.
  • An image signal of the captured video data is supplied from the image memory 511 to the MPU 520 through the signal line 610.
  • the MPU 520 performs the various processes described above using the image signal in the image memory 511.
  • the MPU 520 extracts feature data of a human face, for example, a pattern such as an outline, eyes, nose, and mouth from an image signal, for example. Further, the MPU 520 extracts feature data of arms and fingers from the image signal.
  • the MPU 520 performs personal recognition processing, distance measurement processing based on binocular parallax, and the like using the extracted feature data.
  • the MPU 520 compares the extracted facial feature data with the registered facial feature data stored in the nonvolatile data memory 521 to identify the individual user.
  • the MPU 520 determines that the subject person corresponds to the individual user of the registered face feature data when there is registered face feature data having a certain degree of similarity or more with the face feature data of the subject person.
  • the MPU 520 reads virtual surface information and the like corresponding to the identified individual from the nonvolatile data memory 521.
  • the virtual surface information is information for setting a virtual surface space for each individual user in the space 100 described above, and includes setting values related to the lengths L1, L2, and the like.
  • the MPU 520 reads standard virtual surface information from the nonvolatile data memory 521 when the individual user cannot be identified by the personal recognition process. This information includes default values for lengths L1, L2, etc.
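  • The per-user lookup described here is essentially a similarity match against registered facial feature data, followed by a lookup of the user's virtual surface information with a default fallback. A minimal sketch, using cosine similarity and an assumed threshold; the feature vectors and all values are illustrative assumptions.

```python
# Sketch of selecting virtual-surface settings per user, as done by the MPU 520
# with the nonvolatile data memory 521. Feature vectors, the similarity metric,
# and the threshold are assumptions for illustration.

import numpy as np

REGISTERED = {                      # user id -> registered facial feature vector
    "user_a": np.array([0.9, 0.1, 0.3]),
    "user_b": np.array([0.2, 0.8, 0.5]),
}
SURFACE_INFO = {                    # user id -> (L1, L2) in millimetres
    "user_a": (250.0, 300.0),
    "user_b": (180.0, 220.0),
}
DEFAULT_SURFACE = (230.0, 280.0)    # standard virtual surface (default values)
SIM_THRESHOLD = 0.95                # assumed minimum similarity

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def virtual_surface_for(face_feature: np.ndarray):
    """Return (user_id, (L1, L2)); falls back to the standard surface."""
    best_id, best_sim = None, 0.0
    for uid, feat in REGISTERED.items():
        sim = cosine(face_feature, feat)
        if sim > best_sim:
            best_id, best_sim = uid, sim
    if best_id is not None and best_sim >= SIM_THRESHOLD:
        return best_id, SURFACE_INFO.get(best_id, DEFAULT_SURFACE)
    return None, DEFAULT_SURFACE

if __name__ == "__main__":
    print(virtual_surface_for(np.array([0.88, 0.12, 0.31])))  # close to user_a
    print(virtual_surface_for(np.array([0.5, 0.5, 0.5])))     # unknown -> default
```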
  • the human sensor 601 detects the presence or absence of a person in a predetermined range around the display device 1.
  • the human sensor 601 provides a detection signal indicating human detection to the MPU 520 through the signal line 612 when a person enters the predetermined range.
  • when the MPU 520 receives a detection signal representing human detection from the human sensor 601, the MPU 520 gives a control signal to the switch 560 through the signal line 613 and switches the switch 560 from the off state to the on state.
  • power is supplied from the main power supply unit 602 to the two cameras through the power supply lines 651 and 652, and the two cameras are turned on.
  • the MPU 520 selects and reads out the high-intensity image signal from the nonvolatile data memory 521 when the brightness of the room is less than the brightness necessary for personal recognition based on the image signal from the image memory 511.
  • the MPU 520 sends the high luminance image signal to the graphics circuit 540 through the signal line 611.
  • the graphics circuit 540 controls the liquid crystal driving circuit 550 based on the high luminance image signal, and a high luminance image is displayed on the screen 10 by driving from the liquid crystal driving circuit 550.
  • the high luminance image signal may be data of a menu screen having a high luminance background.
  • the high-intensity image signal may include a message such as “Please brighten the room lighting”. The message is superimposed on the screen 10 and displayed.
  • when the human sensor 601 detects that the user has left the display device 1 and moved out of the predetermined range, the MPU 520 starts measuring time using an internal timer (not shown). The MPU 520 automatically turns off the power of the display device 1 if a person does not enter the predetermined range again before a predetermined time has elapsed. That is, the display device 1 has a so-called auto shut-off function.
  • FIG. 23 shows a first processing flow of the display device 1 according to the third embodiment.
  • the processing in FIG. 23 is mainly performed by the MPU 520.
  • FIG. 23 includes steps S1 to S6. The steps are described in order below.
  • the display device 1 determines from the video captured by the camera whether the lighting state of the room is sufficiently bright for the camera to capture the user's face and head. If the brightness is sufficient (Y), the process proceeds to S4.
  • if the brightness is insufficient (N), the display device 1 displays a high-luminance video on the screen 10 in S3. Alternatively, a message or the like asking the user to brighten the room is displayed.
  • the display device 1 performs a remote operation control process using the camera captured image. This process is shown as a second process flow in FIG.
  • the display device 1 determines whether to end the remote operation control of the display system. For example, when the user inputs an end operation, when the absence of the user is detected from the captured video of the camera, or when the human sensor 601 no longer detects a person in the surrounding range, the display device 1 judges that the control is to be ended and proceeds to S6. If not ended (N), the process of S4 is repeated.
  • the display device 1 turns off the power supply from the main power supply unit 602 to the display function and to the two cameras, and puts the display device 1 into a standby state. After that, the same processing is repeated from S1.
  • FIG. 24 shows a second processing flow of the display device 1 according to the third embodiment, which is mainly processing by the MPU 520.
  • FIG. 24 includes steps S11 to S22. The steps are described in order below.
  • the display device 1 detects the user's face and fingers from the camera-captured video, and detects the distance and position to the point P0 and the point F0 by distance measurement processing based on binocular parallax.
  • the display device 1 extracts a facial feature in the facial image from the camera-captured video, compares it with the registered facial feature data, and performs personal recognition processing for identifying the individual user.
  • the display device 1 refers to the virtual surface information set for each individual user from the nonvolatile data memory 521.
  • the display device 1 refers to the standard virtual surface information, which is the default value, from the nonvolatile data memory 521. Even when the virtual surface information for each user is not registered, the standard virtual surface information is referred to.
  • the virtual plane information includes lengths L1 and L2.
  • the display device 1 uses the virtual surface information to set the first virtual surface 201 and the second virtual surface 202 at the positions of the predetermined lengths L1 and L2 on the reference axis J0 from the point P0, which is the user-side reference point, toward the point Q0.
  • the display device 1 calculates the distance DST between the position of the finger point F0 and the second virtual surface 202, and the like. The display device 1 uses the distance DST to determine the degree and depth of the finger entering the virtual surface space.
  • the display device 1 determines whether or not the finger has entered the second space 102 behind the first virtual plane 201. If it has entered the second space 102 (Y), the process proceeds to S19, and if not (N), the process proceeds to S20.
  • the display device 1 displays the cursor in a state corresponding to the distance DST in the menu screen of the screen 10 in correspondence with the position of the finger. At that time, the display device 1 gives operation input information 301 including cursor display control information to the GUI display unit 14.
  • the display device 1 determines whether or not the finger has entered the third space 103 behind the second virtual plane 202. If it has entered the third space 103 (Y), the process proceeds to S21, and if not (N), the process ends.
  • the display device 1 determines a predetermined operation such as a touch operation on the second virtual surface 202 as shown in FIG. 14.
  • the display device 1 gives operation input information 301 including operation information representing the predetermined operation to the GUI display unit 14.
  • the display device 1 causes the GUI display unit 14 to perform corresponding processing according to a predetermined operation on the GUI object. Further, a prescribed operation of the display device 1 is executed according to the handling process.
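  • Put together, the second processing flow is a per-frame loop: measure P0 and F0, set the virtual surfaces at L1 and L2 on the reference axis, compute the distance DST, and branch on which space the finger is in. The sketch below stubs out detection, recognition, and GUI rendering and only shows the branching; all concrete values are assumptions.

```python
# Sketch of the second processing flow (S11..S22) as a per-frame control loop.
# Detection, recognition, and GUI rendering are stubbed; only the branching on
# the finger's depth relative to the virtual surfaces is shown.

def process_frame(p0_z, f0_z, l1, l2):
    """p0_z, f0_z: Z positions of the user-side reference point P0 and the
    finger point F0, measured from the screen; l1, l2: virtual-surface
    distances from P0 toward the screen. Returns a description of the action.
    """
    z1 = p0_z - l1           # first virtual surface 201 (closer to the user)
    z2 = p0_z - l2           # second virtual surface 202 (closer to the screen)
    dst = f0_z - z2          # > 0: in front of 202, <= 0: touching / behind it

    if f0_z > z1:
        return {"cursor": False, "operation": None}              # no entry yet
    if f0_z > z2:
        return {"cursor": True, "dst": dst, "operation": None}   # cursor feedback only
    # Finger is in the third space 103 behind the second virtual surface.
    return {"cursor": True, "dst": dst, "operation": "touch"}

if __name__ == "__main__":
    P0_Z, L1, L2 = 2000.0, 250.0, 300.0     # millimetres, illustrative
    for f0_z in (1900.0, 1730.0, 1690.0):
        print(f0_z, process_frame(P0_Z, f0_z, L1, L2))
```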
  • the GUI display unit 14 is realized by the MPU 520, and the MPU 520 generates the operation input information 301 and performs the GUI display control process by itself.
  • Examples of the operation of the display device 1 according to a predetermined operation include, in addition to general switching operations of video, audio, digital input, and the like, various operations that can be performed from the operation menu of an existing remote control device. By using the remote operation of this display system, it is possible to give an operation instruction without using an existing remote control device.
  • remote control can be realized without using a remote control device, and power saving can also be realized.
  • A display device and the like according to the fourth embodiment of the present invention will be described with reference to FIGS. 25 and 26.
  • in the fourth embodiment, the display device 1 is a projection display device, that is, a projector.
  • the projector has a function of projecting and displaying an image on the screen 10 of the screen 250 based on digital input data or the like.
  • the display device 1 is installed on the ceiling by the mounting tool 251 and performs projection display on the screen 250 in front of it.
  • two cameras are provided in the housing of the display device 1. Cameras 21 and 22 are arranged at the left and right points Qa and Qb of the housing. The orientations of the two cameras are set so that the photographing range is the lower part from the ceiling and the front part from the screen 250, and the orientations can be adjusted. Images taken from the cameras 21 and 22 are images from slightly above the user.
  • FIG. 25 shows a case where there are users A, B, and C as a plurality of users who view the projected video on the screen 10 of the screen 250.
  • a state in which user A performs a remote operation on the virtual surface space of the space 100 as in the first embodiment is shown.
  • Users B and C on both sides of the user A are watching the projected video on the same screen 10.
  • User B has long bangs and user C is wearing a hat with a brim.
  • FIG. 26 shows an XZ plane corresponding to a photographed image obtained by viewing the space 100 below from the two cameras of the ceiling display device 1, corresponding to the scene of FIG. 25. Similar to the first embodiment, it is possible to detect the user-side reference point and finger position of each user by distance measurement based on binocular parallax in the analysis of camera-captured images.
  • the virtual surface space 102a is set in the space 100a of the user A.
  • a virtual plane space 102b is set in the user B space 100b.
  • a virtual surface space 102c is set in the space 100c of the user C.
  • the camera near the ceiling is located above the user's head, but the principle of distance measurement based on binocular parallax can be applied in the same way.
  • in some cases, the user's face, both eyes, and the like are hardly captured in the camera-captured image.
  • in the photographed image as shown in FIG. 26, both eyes of the face of the user B are hidden by the bangs.
  • User C's face is hidden by a hat.
  • a user-side reference point can be set as in the first embodiment. Even when both eyes are not captured, the user-side reference point can be set by a predetermined method. For example, when a skin color portion of the face can be detected, a user-side reference point may be set for the portion.
  • a part such as a head, hair, or hat may be detected, and a user-side reference point may be set, for example, at the tip position in the Z direction with respect to that part.
  • the characteristics of the head and body viewed from above are extracted, and, for example, one point at the tip of the head is set as the point P0.
  • the display device 1 also knows in advance the positional relationship between the camera-side reference point near the ceiling and the center point Q0 of the screen 10.
  • a reference axis J0 can be set between the point Q0 and the point P0, which is the user-side reference point of each user, and a virtual plane can be set on the reference axis J0.
  • the display device 1 may perform a remote operation control process in accordance with the determination of the extension / contraction of the user's arm.
  • a cursor corresponding to the position of the finger point F0 can be projected and displayed on the point Kx on the screen 10 as a substitute for a conventional laser pointer or the like.
  • the user can point a portion to be pointed on the screen 10 with the cursor in accordance with the remote operation in the virtual plane space.
  • Designation of the slide material currently projected, etc. can also be realized as a remote operation for a predetermined GUI.
  • the movement of people other than the viewer acts as a disturbance, making it difficult to detect the viewer's gesture.
  • the person other than the viewer includes, for example, a child playing in the same room, a person passing by, a person who is not watching the screen, and a person who is not aware of the operation.
  • in the fourth embodiment, even in an environment where there are a plurality of people, it is possible to perform a suitable remote operation with few erroneous operations.
  • the projector that is the display device 1 is installed on a table in front of the screen 250.
  • Two cameras 21 and 22 are arranged at points Qa and Qb which are positions on the left and right sides of the housing of the display device 1.
  • the display device 1 projects and displays an image on the screen 10 of the front screen 250.
  • the cameras 21 and 22 capture a predetermined range including the user ahead from the screen 10.
  • the remote operation can be realized as in the first embodiment.
  • the projector that is the display device 1 is arranged on a table in front of the screen 250.
  • the two cameras are arranged on the screen 250 at, for example, the upper right and upper left positions.
  • a camera 21 is attached to the upper right point Qa, and a camera 22 is attached to the upper left point Qb.
  • the housing of the display device 1 and the two cameras are connected by wirings 281 and 282.
  • the wirings 281 and 282 are wired cables including signal lines and power supply lines.
  • a camera image signal is transmitted to the display device 1 through wirings 281 and 282.
  • remote control can be realized as in the first embodiment.
  • the arrangement positions of the two cameras can be adjusted.
  • the distance between the two cameras can be increased, and the shooting range can be increased.
  • the wirings 281 and 282 can be omitted.
  • the display device 1 according to the seventh embodiment has a basic configuration similar to that of the first embodiment, and additionally has a remote control function when a plurality of users simultaneously use remote operation.
  • a plurality of users are viewing the screen 10 of the display device 1 in a meeting or the like.
  • content such as slide material is displayed.
  • This display system can also be used for a conference system and the like.
  • a virtual plane of the space 100 is set for each of a plurality of users.
  • in the figure, illustration of a part of the virtual surfaces, for example, the second virtual surface 202 of the user B and the second virtual surface 202 of the user C, is omitted.
  • the display device 1 gives an operation authority as a representative operator to only one user. Of the plurality of users, for example, the case where the user B first performed an operation on the virtual surface space at the first time point is shown. Other users are not touching the virtual surface space. The finger of the user B enters the first virtual surface 201 and performs a touch operation on the second virtual surface 202 or the like.
  • the display device 1 gives the operation authority as the current representative operator to the user B who first enters the virtual plane space. Corresponding to the operation of the user B, a cursor 291 of the representative operator is displayed on the screen 10.
  • an image 292 representing the user B who is the representative operator who has the current operation authority and is performing remote operation is displayed at a part of the screen 10.
  • a photographed image of the camera is used.
  • a trimmed image of a part of the captured image is used.
  • the image 292 may be a registered face image for personal recognition, or may be other information such as an icon or a mark for each user. With the image 292, the situation of who the current remote operator is can be commonly recognized by all users.
  • the user C performs an operation on the virtual surface space of the user C.
  • the finger of user C enters the virtual surface space through the first virtual surface 201.
  • the display device 1 does not give the operation authority to the user C and invalidates the remote operation.
  • the cursor corresponding to the position of the finger of the user C is not displayed.
  • in another control example, a plurality of cursors, one for each user, are displayed on the screen 10.
  • in that form, simultaneous remote operation is allowed for each of a plurality of users.
  • Each user performs remote operations in order as necessary.
  • a plurality of cursors are displayed on the screen 10 and the display state becomes excessive, but this is useful depending on the usage method.
  • a suitable remote operation is possible even when a plurality of users perform a remote operation.
  • the display device 1 may set priorities among a plurality of users. When a plurality of users put their fingers on their virtual surfaces at the same time, for example, only the user with the highest priority is given the operation authority according to the priority setting. In addition, when a first user with a lower priority is performing remote operation and a second user with a higher priority places a finger on the virtual surface, the operation authority may be transferred from the first user to the second user.
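  • The representative-operator control above can be modelled as a simple lock with optional priorities: the first user whose finger enters their virtual surface space holds the operation authority, and a higher-priority user may optionally take it over. The priorities and the take-over policy in the sketch below are assumptions.

```python
# Sketch of the operation-authority control for multiple simultaneous users
# (seventh embodiment). Priorities and the take-over policy are assumptions.

class OperationAuthority:
    def __init__(self, priorities=None, allow_takeover=False):
        self.priorities = priorities or {}     # user id -> priority (higher wins)
        self.allow_takeover = allow_takeover
        self.holder = None                     # current representative operator

    def on_enter(self, user_id: str) -> bool:
        """Called when a user's finger enters their virtual surface space.
        Returns True if this user now holds the operation authority."""
        if self.holder is None:
            self.holder = user_id
        elif self.allow_takeover and (
                self.priorities.get(user_id, 0) > self.priorities.get(self.holder, 0)):
            self.holder = user_id              # higher-priority user takes over
        return self.holder == user_id

    def on_leave(self, user_id: str) -> None:
        """Called when a user's finger leaves their virtual surface space."""
        if self.holder == user_id:
            self.holder = None

if __name__ == "__main__":
    auth = OperationAuthority(priorities={"A": 1, "B": 2}, allow_takeover=True)
    print(auth.on_enter("A"))   # True: A becomes the representative operator
    print(auth.on_enter("B"))   # True: B has higher priority and takes over
    print(auth.on_enter("A"))   # False: A's operation is invalidated
```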
  • the display device 1 detects a face and a head portion and a finger and arm portion of the user based on the analysis of the camera photographed image. At that time, the display device 1 determines the continuity and identity of the user's individual body. That is, for example, it is determined whether or not the arms and fingers are continuously connected from the face area in the photographed image.
  • the face in the camera-captured image and the finger in the virtual plane space may not belong to the same user during remote operation.
  • the user A may extend his arm from the side of the user B and touch the virtual surface of the user B with his / her finger.
  • the display device 1 also considers such a possibility, and determines the continuity and identity of the user's individual body when analyzing the camera-captured image. If the display device 1 determines that the face and fingers in the camera-captured image are from different users, the display device 1 invalidates the remote operation.
  • the display system may detect not only a finger but also an object such as a stick during the analysis of the camera-captured image, determine the continuity between the finger and the object, and detect, for example, a point representing the position of the tip of the object as the point F0.
  • the display system may detect the color or shape of the object and use it for control.
  • DESCRIPTION OF SYMBOLS: 1... display device, 10... screen, 21, 22... camera, 100... space, 101... first space, 102... second space, 103... third space, 201... first virtual surface, 202... second virtual surface, J0... reference axis, P0, Q0 to Q8, F0... points.

Abstract

Provided is a technique for controlling a user's remote operation on the screen of a display device that requires only a small amount of learning and improves the user's ease of use. The display device is provided with cameras 21, 22 for imaging an area that includes a user viewing a screen 10. By analyzing the captured images of the two cameras, the device detects the positions of a first point (point P0), which is a user-side reference point relative to a camera-side reference point (point Q0), and a second point (point F0) of the user's finger; sets a virtual surface space (second space 102) in a space 100 linking the screen 10 and the first point, at a position of a prescribed length in the viewing direction from the first point so as to overlap the screen 10 as seen from the user; determines a prescribed remote operation, including an operation of touching the virtual surface space with a finger, on the basis of the degree of entry of the finger into the virtual surface space, including the distance between the virtual surface space and the second point; generates operation input information that includes the position of the second point or a position in the screen 10 and operation information representing the prescribed remote operation; and controls the operation of the display device.

Description

Display device and remote operation control device
The present invention relates to display device technology, and more particularly to technology for controlling a user's operation on a display device.
Electronic devices such as display devices, including television receivers and projectors, have an operation input panel, a remote control device, a graphical user interface (GUI) on the screen, and the like as man-machine interfaces. Through these operation input means, the user can instruct the electronic device to perform various operations. With the progress of video display technology, camera imaging technology, semiconductor technology, and the like, and with the expansion of the surrounding environment, technologies have also been developed as operation input means that enable more intuitive and smoother operation than operation with a conventional remote control device. For example, there are techniques for operating an electronic device through a voice recognition function. Such advanced interface technologies may be introduced into ordinary households in the future. Even in such a situation, operation using a remote control device is still common. The main reasons are that it can be used easily, stable operation can be achieved, and devices can be manufactured with inexpensive parts.
On the other hand, techniques have also been proposed that apply widely available cameras as a means of remote operation for the GUI on the screen of a display device. For example, there are systems in which a camera provided near the display device detects a gesture made by a predetermined movement of the user, and the operation of the display device is controlled by treating that gesture as an operation instruction given by remote operation. In such a system, a user at a position away from the screen performs a predetermined gesture, such as waving a hand from side to side. The system detects the gesture by analyzing the camera images and issues the operation instruction corresponding to that gesture.
Japanese Unexamined Patent Application Publication No. H8-315154 (Patent Document 1) is a prior art example that realizes remote operation of a display device or the like using a camera. Patent Document 1 describes a gesture recognition system along the following lines. The system uses a camera to detect the presence of a predetermined hand gesture and its position in space and generates a gesture signal; based on the gesture signal, a hand icon is displayed at the on-screen position corresponding to the position in space where the hand gesture was detected. The system generates a machine control signal when the hand icon is moved onto a machine control icon in accordance with the movement of the hand gesture.
JP-A-8-315154
In prior art systems for controlling a user's remote operation on the screen of a display device, a plurality of gestures defined by predetermined movements of the user's hand and fingers are provided, and an operation instruction or the like is associated with each gesture. To use remote operation in such a system, the user must learn and distinguish the plurality of gestures, including the correspondence between gestures and operation instructions, which requires a large amount of learning. A certain degree of practice is also needed to prevent erroneous operation. The need for such learning is a considerable disadvantage compared with the simplicity and convenience of a remote control device. The prior art therefore has problems in terms of usability and leaves room for improvement.
Furthermore, such a system must constantly perform processing that distinguishes and detects the plurality of gestures from the user's movements based on analysis of the camera images. The computer therefore requires a large amount of processing, and the load is high.
An object of the present invention is to provide a technique for controlling a user's remote operation on the screen of a display device that requires only a small amount of learning and offers good usability.
A representative embodiment of the present invention is a display device and a remote operation control device having the following configuration.
A display device according to one embodiment is a display device having a function of controlling a user's remote operation on the screen of the display device, and comprises at least two cameras that capture a range including a user viewing the screen. By analyzing the images captured by the two cameras, the display device detects, relative to the position of a camera-side reference point of the two cameras, the position of a first point, which is a user-side reference point associated with the user's head, face, or eyes, and the position of a second point on the user's finger. In the space connecting the screen and the first point, the display device sets a virtual surface space at a position a predetermined length from the first point in the viewing direction toward the screen, so that the virtual surface space overlaps the screen as seen from the user. The display device calculates the degree to which the finger has entered the virtual surface space, including the distance between the position of the virtual surface space and the position of the second point, and, based on that degree of entry, determines a predetermined remote operation including a touch operation of the finger on the virtual surface space. The display device then generates operation input information including the position of the second point, or position coordinates in the screen associated with the position of the second point, and operation information representing the predetermined remote operation, and controls the operation of the display device using the operation input information.
According to a representative embodiment of the present invention, with respect to the technique for controlling a user's remote operation on the screen of a display device, the amount of learning required is small and the user's ease of use can be improved.
FIG. 1 is a perspective view showing the configuration of a display system including the display device according to Embodiment 1 of the present invention.
FIG. 2 is a diagram showing the functional block configuration of the display system including the display device of Embodiment 1.
FIG. 3 is a diagram showing the space viewed from the side in Embodiment 1.
FIG. 4 is an explanatory diagram showing details of the virtual surface space and the like in Embodiment 1.
FIG. 5 is a diagram showing the space viewed from above in Embodiment 1.
FIG. 6 is a diagram showing the case in Embodiment 1 where the viewing position is at an angle to the center of the screen.
FIG. 7 is a diagram showing the overlap of the screen and the virtual surface in the user's field of view in Embodiment 1.
FIG. 8 is a diagram illustrating distance measurement based on binocular parallax in Embodiment 1.
FIG. 9 is a diagram illustrating the setting of the user-side reference point based on analysis of camera images in Embodiment 1.
FIG. 10 is a diagram showing an example of screen display control according to the finger position in Embodiment 1.
FIG. 11 is a diagram showing an example of cursor display control in Embodiment 1.
FIG. 12 is a diagram showing an example of cursor display control in Embodiment 1.
FIG. 13 is a diagram showing the shape of the space in Embodiment 1.
FIG. 14 is an explanatory diagram of an operation example in Embodiment 1.
FIG. 15 is a diagram showing virtual surface adjustment for each individual user in Embodiment 1.
FIG. 16 is a diagram showing an example of camera arrangement and posture in a first modification of Embodiment 1.
FIG. 17 is a diagram showing virtual surface correction in the first modification.
FIG. 18 is a diagram showing a single virtual surface in the space and an example of display control in a second modification of Embodiment 1.
FIG. 19 is a diagram showing an example of virtual surface adjustment in a third modification of Embodiment 1.
FIG. 20 is a diagram showing the functional block configuration of a display system including a remote operation control device according to Embodiment 2 of the present invention.
FIG. 21 is a diagram showing the configuration of a display system including a display device according to Embodiment 3 of the present invention.
FIG. 22 is a diagram showing the configuration of the display device of Embodiment 3.
FIG. 23 is a diagram showing a first processing flow of the display device of Embodiment 3.
FIG. 24 is a diagram showing a second processing flow of the display device of Embodiment 3.
FIG. 25 is a diagram showing the configuration of a display system including a display device according to Embodiment 4 of the present invention.
FIG. 26 is a diagram showing the space viewed from the camera in Embodiment 4.
FIG. 27 is a diagram showing the configuration of a display system including a display device according to Embodiment 5 of the present invention.
FIG. 28 is a diagram showing the configuration of a display system including a display device according to Embodiment 6 of the present invention.
FIG. 29 is a diagram showing the configuration of a display system including a display device according to Embodiment 7 of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In all the drawings for describing the embodiments, the same parts are in principle denoted by the same reference symbols, and repeated description thereof is omitted.
(Embodiment 1)
The display device according to Embodiment 1 of the present invention will be described with reference to FIGS. 1 to 19. The display device of Embodiment 1 is a display device provided with a remote operation control function that enables remote operation input to its screen.
[Display system (1)]
FIG. 1 shows the configuration of a display system including the display device of Embodiment 1. FIG. 1 schematically shows, in perspective, a state in which a user is looking at the screen 10 of the display device 1. For purposes of explanation, a coordinate system (X, Y, Z) is used, where the X direction is a first direction, the Y direction is a second direction, and the Z direction is a third direction. The X and Y directions are the two orthogonal directions that constitute the screen 10: the X direction is the horizontal direction within the screen and the Y direction is the vertical direction within the screen. The Z direction is the direction perpendicular to the X and Y directions of the screen 10.
In the display system of FIG. 1, the user is a viewer who views video on the screen 10 of the display device 1 and is also the operator who performs remote operations on the screen 10. The user-side reference point corresponding to the user's viewpoint is indicated by a point P0. The point P0 is a point on the face or head, for example the midpoint between both eyes. The figure shows a case where the user faces the screen 10 squarely in the viewing posture and the viewing direction is along the Z direction. The position of the finger during the user's remote operation is indicated by a point F0. The point F0 is, for example, the tip of the finger closest to the screen 10 in the Z direction.
The display device 1 is, for example, a television receiver, and has functions such as receiving broadcast waves and reproducing and displaying broadcast programs, and reproducing and displaying video based on digital input data. The screen 10 is a rectangular plane; its center is a point Q0 and its four corners are points Q1 to Q4. The midpoint of the right side is a point Q5, the midpoint of the left side is a point Q6, the midpoint of the upper side is a point Q7, and the midpoint of the lower side is a point Q8.
The display device 1 includes two cameras, a camera 21 and a camera 22. The two cameras are arranged at left and right positions with respect to the axis connecting the points Q7 and Q8. The display device 1 has a function of measuring the distance to a target object based on binocular parallax using the images captured by the two cameras. The target objects are the user's eyes and fingers. The camera 21 is the right camera and is disposed at the point Q5 in the middle of the right side of the screen 10. The camera 22 is the left camera and is disposed at the point Q6 in the middle of the left side of the screen 10. The two cameras are oriented to capture a predetermined range including the user in front of the screen 10, and their orientation is adjustable. The moving images or still images captured by the cameras include the user as a subject. The camera-side reference point of the two cameras is the point Q0, the midpoint between the points Q5 and Q6.
 表示装置1は、2つのカメラの撮影映像の解析処理において、両眼視差に基づいた距離計測によって、対象物体との距離及び位置を検出する。両眼視差に基づいた距離計測は、公知の原理を適用できる。この距離は、カメラ側基準点である点Q0と、ユーザ側基準点である点P0との距離や、点Q0と手指の点F0との距離である。これらの距離から、点P0や点F0の位置が、3次元空間の座標系(X,Y,Z)の位置座標として得られる。表示装置1は、時点毎に点P0や点F0の位置を検出する。 The display device 1 detects the distance and position from the target object by distance measurement based on binocular parallax in the analysis processing of the captured images of the two cameras. A known principle can be applied to distance measurement based on binocular parallax. This distance is the distance between the point Q0 that is the camera side reference point and the point P0 that is the user side reference point, or the distance between the point Q0 and the finger point F0. From these distances, the positions of the points P0 and F0 are obtained as the position coordinates of the coordinate system (X, Y, Z) in the three-dimensional space. The display device 1 detects the positions of the point P0 and the point F0 for each time point.
The point P0, the user-side reference point, is set as a point on the head or face, for example the midpoint between both eyes. In FIG. 1, the midpoint between both eyes represents the user-side reference point P0. When both eyes can be detected from the images captured by the two cameras, the point P0 can be set.
When the user views the screen 10, there is a quadrangular pyramid-shaped space 100 whose base is the four points Q1 to Q4 of the screen 10 and whose apex is the user-side reference point P0. Based on the detected position of the point P0, the display device 1 determines the space 100 formed between the point P0 and the screen 10. The straight line connecting the point P0 and the point Q0 is shown as a reference axis J0; the direction of the reference axis J0 corresponds to the viewing direction when the user looks at the point Q0 of the screen 10. Based on the point P0 and the space 100, the display device 1 sets a specific virtual surface within the space 100. This virtual surface is a virtual surface that accepts the user's remote operations on the screen 10. In FIG. 1, within the space 100, the space sandwiched between a first virtual surface 201 and a second virtual surface 202 is set as a second space 102, which is the virtual surface space. In the space 100, a virtual surface reference point is provided at a position a predetermined length from the point P0 along the reference axis J0 toward the point Q0, and the two virtual surfaces, the first virtual surface 201 and the second virtual surface 202, are set there. The virtual surface space is a roughly flat-plate-shaped space having a predetermined thickness.
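As an illustrative sketch only, not part of the original disclosure, the placement of the two virtual surface centers on the reference axis J0 could be computed as follows from the detected point P0, the camera-side reference point Q0, and the predetermined lengths L1 and L2; the function name, vector helpers, and the numeric values in the example are assumptions.

```python
import numpy as np

def virtual_surface_centers(p0, q0, l1, l2):
    """Place the centers C1, C2 of the first and second virtual surfaces on the
    reference axis J0 (the line from P0 toward Q0), at the predetermined lengths
    L1 and L2 from the user-side reference point P0."""
    p0 = np.asarray(p0, dtype=float)   # user-side reference point (Xp, Yp, Zp)
    q0 = np.asarray(q0, dtype=float)   # camera-side reference point (screen center)
    j0 = q0 - p0                       # direction of the reference axis J0
    j0 /= np.linalg.norm(j0)           # unit vector along the viewing direction
    c1 = p0 + l1 * j0                  # center of the first virtual surface 201
    c2 = p0 + l2 * j0                  # center of the second virtual surface 202
    return c1, c2

# Example (assumed values): user 2.0 m in front of the screen center,
# L1 = 0.35 m and L2 = 0.40 m, so the thickness M = L2 - L1 is 5 cm.
c1, c2 = virtual_surface_centers(p0=(0.0, 0.0, 2.0), q0=(0.0, 0.0, 0.0), l1=0.35, l2=0.40)
```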
Within the space 100, the quadrangular pyramid-shaped space from the point P0 to the first virtual surface 201 is a first space 101, the virtual surface space is the second space 102, and the space from the second virtual surface 202 to the screen 10 is a third space 103. In the user's field of view when looking at the screen 10, the virtual surface overlaps the screen 10 exactly. The processor of the display device 1 manages the space 100 and the virtual surface space, by calculation, as position coordinates in three-dimensional space.
The display device 1 determines and detects a predetermined operation by the finger based on the positional relationship between the finger point F0 and the virtual surface space and on the degree to which the finger has entered the virtual surface space. In the viewing direction, the degree of entry of the finger point F0 with respect to the virtual surface is detected, for example, as the distance between the position of the point F0 and the position of the second virtual surface 202. The predetermined operation is, for example, a touch, tap, or swipe on the second virtual surface 202. The display device 1 detects the position of the finger relative to the virtual surface space and the predetermined operation, and generates operation input information representing the detected position and operation. The display device 1 uses this operation input information to control the operation of the display device 1 and the GUI of the screen 10. For example, in the case of a touch operation on a GUI object of the screen 10, such as an option button, the display device 1 performs the corresponding processing for the touch operation on that object, for example processing to confirm selection of the option.
In FIG. 1, the user is viewing the video on the screen 10 in a standard posture. In such normal viewing, there are no obstacles in the space 100 that obstruct viewing. In the display system of Embodiment 1, the user's intention to operate the GUI or the like of the screen 10 is detected as a remote operation from the movement of the user's fingers within this space 100, in particular their movement in the second space 102, the virtual surface space.
When the fingertip reaches the first virtual surface 201, the display device 1 automatically shifts to a state of accepting remote operation and displays a dedicated cursor on the screen 10. The cursor is a pointer image representing the presence of the user's finger in the virtual surface space, and is information for feeding back the remote operation state to the user. Positions in the XY plane of the virtual surface space and positions in the XY plane of the GUI or the like of the screen 10 are managed in association with each other, and the cursor is displayed at the position in the GUI of the screen 10 that corresponds to the position of the finger in the virtual surface space.
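For illustration only, and not as the disclosed implementation, the cursor display decision described here follows directly from the degree of entry: the cursor is shown once the fingertip has reached the first virtual surface 201, that is, while its depth from P0 along J0 is at least L1. The function below is a hypothetical helper.

```python
def cursor_visible(depth_along_j0, l1):
    """Show the dedicated cursor once the fingertip has reached the first
    virtual surface 201 (illustrative rule; l1 is the length L1 from P0)."""
    return depth_along_j0 >= l1
```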
[Display system (2)]
FIG. 2 shows the functional block configuration of the display device 1 of the display system of FIG. 1. The display device 1 includes a control unit 11, a storage unit 12, a content display unit 13, a GUI display unit 14, a display drive circuit 15, the screen 10, and a remote operation control unit 20. The remote operation control unit 20 includes the camera 21, the camera 22, a binocular parallax calculation unit 23, a virtual surface operation determination unit 24, an operation input information output unit 25, a personal recognition unit 26, a virtual surface setting unit 27, and a virtual surface adjustment unit 28.
The control unit 11 controls the entire display device 1. When a remote operation is performed, the control unit 11 controls the operation of the display device 1 according to the operation input information 301. The storage unit 12 stores control information and data such as content. The content display unit 13 displays content video on the screen 10 based on content data. The content can be of various types, such as broadcast programs, DVD video, and document files.
The GUI display unit 14 is the part of the OS or an application of the display device 1 that controls the display of GUI images on the screen 10 and performs the prescribed processing corresponding to operations on GUI objects. The GUI display unit 14 displays GUI images on the screen 10. The GUI may be a predetermined menu screen or the like, and it may be superimposed on content. Based on the operation on an object indicated by the operation input information 301, the GUI display unit 14 performs the prescribed corresponding processing for the operation on that object (for example, selection confirmation processing for a touch operation on an option button) and controls the display state of the GUI (for example, an effect display showing that the option button has been pressed and the selection confirmed). The GUI display unit 14 also controls the display state of the cursor on the GUI using the cursor display control information in the operation input information 301.
The display drive circuit 15 generates a video signal based on the video data input from the content display unit 13 and the GUI display unit 14, and displays video on the screen 10.
The user operates on the virtual surface space of FIG. 1. That operation is a remote operation on the screen 10. The operation is detected and determined by the remote operation control unit 20 and converted into an operation input to the GUI of the screen 10. Each of the two cameras captures a range including the user's face and fingers and outputs the captured video. The captured video data is input to the binocular parallax calculation unit 23 and the personal recognition unit 26.
The binocular parallax calculation unit 23 analyzes the captured video, extracts features such as the face and fingers, and, based on the principle of distance measurement using binocular parallax, measures the distance to the face point P0 and the distance to the finger point F0, obtaining the position coordinates of the points P0 and F0. The binocular parallax calculation unit 23 outputs viewpoint position information including the obtained position coordinates of the point P0 to the virtual surface setting unit 27, and outputs finger position information including the obtained position coordinates of the point F0 to the virtual surface operation determination unit 24.
The virtual surface setting unit 27 uses the viewpoint position information of the user at that time to set the virtual surface space, that is, the first virtual surface 201 and the second virtual surface 202, in the space 100. Virtual surface information representing the virtual surface space set by the virtual surface setting unit 27 is input to the virtual surface operation determination unit 24.
Based on the virtual surface information and the finger position information, the virtual surface operation determination unit 24 determines the positional relationship of the finger with respect to the virtual surface space, the state of its entry, and whether a predetermined operation has been performed. For example, the virtual surface operation determination unit 24 determines the positional relationship and degree of entry, such as whether the finger point F0 has entered beyond the first virtual surface 201 or beyond the second virtual surface 202. Based on that positional relationship and degree of entry, it also determines whether a predetermined operation such as a touch operation has been performed. The predetermined operation is an operation on the virtual surface; in the GUI display unit 14 it is interpreted as an operation on a GUI object. The virtual surface operation determination unit 24 provides the determination result information to the operation input information output unit 25.
The operation input information output unit 25 generates the operation input information 301 based on the determination result information and outputs it to the GUI display unit 14 and the like. The operation input information output unit 25 generates the operation input information 301 from the position coordinates of the finger point F0 in the virtual surface space, the position coordinates in the screen 10 associated with them, and the determination result of the predetermined operation.
The operation input information 301 includes, for example, position coordinate information of the finger point F0 at each point in time, operation information representing a predetermined operation, cursor display control information, and the like. The finger position coordinate information ((Xf, Yf, Zf) in FIG. 3) may be converted, through the association, into position coordinate information within the screen 10 ((xf, yf) in FIG. 7). In that case, the operation input information 301 includes the position coordinate information within the screen 10. This conversion may be performed by the GUI display unit 14 instead of the operation input information output unit 25.
The operation input information 301 includes the operation information when a predetermined operation has been performed. The operation information indicates the type and presence or absence of a predetermined operation such as a touch, tap, or swipe. The operation input information 301 also includes cursor display control information according to the finger position. The cursor display control information is information for controlling the cursor display state on the screen 10, and includes information specifying whether the cursor is displayed, its size, and so on. In this example, the operation input information 301 is operation input information for the GUI, but it may instead be commands or control information giving operation instructions to the OS, applications, content, and the like.
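For illustration only, the contents of the operation input information 301 described above could be represented by a structure along the following lines; the class and field names are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class Operation(Enum):
    """Predetermined remote operations named in the text."""
    NONE = 0
    TOUCH = 1
    TAP = 2
    DOUBLE_TAP = 3
    SWIPE = 4

@dataclass
class OperationInputInfo:
    """Illustrative shape of the operation input information 301."""
    finger_xyz: Tuple[float, float, float]    # point F0 (Xf, Yf, Zf) at this time point
    screen_xy: Optional[Tuple[float, float]]  # associated in-screen position (xf, yf), if converted
    operation: Operation                      # operation information (type of remote operation)
    cursor_visible: bool                      # cursor display control: show or hide
    cursor_size: float = 1.0                  # cursor display control: relative size
```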
The personal recognition unit 26 performs processing to recognize the individual user by analyzing the video captured by the cameras. For example, the personal recognition unit 26 compares features extracted from the face image in the captured video with the features of already registered face images, and thereby determines and identifies the individual user. The recognition result information of the personal recognition unit 26 is input to the virtual surface setting unit 27. The personal recognition unit 26 is not limited to recognizing an individual from a face image; a method using a user ID entered by the user, various biometric authentication methods, and the like may also be applied.
An individual user's face image can be registered as follows. The user selects user settings and face image registration from the menu screen of the display device 1. In response, the display device 1 displays a user setting screen and enters a face image registration mode. In this mode, the display device 1 displays a message to the user to the effect that a face image will be registered and asks the user to face the camera. In that state, the display device 1 captures the face with the camera, and registers and retains the face image and the feature information extracted from it as that user's personal face information. Such registration may be omitted, and a face image obtained from images captured during normal use may be used instead.
When the individual user has been identified by the recognition result information, the virtual surface setting unit 27 sets a virtual surface tailored to that individual. In that case, the virtual surface setting unit 27 refers to that user's virtual surface information already set by the virtual surface adjustment unit 28. The virtual surface setting unit 27 sets, for that user's finger point F0, a virtual surface adjusted to that individual, and provides the virtual surface information to the virtual surface operation determination unit 24.
The virtual surface adjustment unit 28 performs processing to adjust the virtual surface to suit each individual user, in other words calibration, based on a user setting operation. As described later, the virtual surface for each individual user has its position on the reference axis J0 adjusted from the standard position to a suitable position nearer to or farther from the user. The virtual surface adjustment unit 28 retains the adjusted virtual surface information set for each individual user.
A specific method of adjustment is as follows. The user selects user settings and virtual surface adjustment from the menu screen of the display device 1. In response, a user setting screen is displayed and a virtual surface adjustment mode is entered. In this mode, a message about the adjustment is displayed, and the user is asked to place a hand at a position that feels comfortable. The display device 1 captures this state with the cameras and detects the shape and position of the fingers from the captured video. The display device 1 then adjusts the position of the virtual surface to match the finger position, and registers and retains it as that user's personal virtual surface information.
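A minimal sketch of this calibration step, under the assumption (not stated in the text) that the thickness M of the virtual surface space is kept constant and only its distance from the point P0 is adjusted to the user's preferred hand position:

```python
import numpy as np

def calibrate_virtual_surface(p0, finger_point, thickness_m=0.05):
    """Illustrative calibration: place the second virtual surface 202 at the distance
    from P0 at which the user held a hand comfortably, and keep the first virtual
    surface 201 one thickness M closer to the user (assumed behavior)."""
    p0 = np.asarray(p0, dtype=float)
    f = np.asarray(finger_point, dtype=float)
    l2 = float(np.linalg.norm(f - p0))   # adjusted length L2 for this individual user
    l1 = max(l2 - thickness_m, 0.0)      # adjusted length L1, keeping the thickness M
    return l1, l2

# Example: a user who holds a hand about 0.45 m in front of the reference point P0
l1, l2 = calibrate_virtual_surface(p0=(0.0, 0.0, 2.0), finger_point=(0.0, -0.1, 1.56))
```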
As a modification, the virtual surface operation determination unit 24 may be integrated into the GUI display unit 14. In that case, the remote operation control unit 20 gives the GUI display unit 14, as the operation input information 301, at least the position coordinate information of the finger point F0 at each point in time, and the GUI display unit 14 determines the predetermined operation from that operation input information 301.
[Space (1) - Horizontal]
FIG. 3 shows the space 100 of the display system of FIG. 1 viewed horizontally from the side, that is, the YZ plane viewed from the X direction. In this state, the space 100 is shown as a triangle whose vertices are the points Q7, Q8, and P0. The second space 102, the virtual surface space, is set within the space 100. The position coordinates of the user-side reference point P0 are denoted (Xp, Yp, Zp), and those of the finger point F0 are denoted (Xf, Yf, Zf). Half of the width of the screen 10 in the Y direction is denoted V1. The distance from the camera-side reference point Q0 to the user-side reference point P0 is denoted D1, and the distance from the point Q0 to the finger point F0 is denoted D2.
In FIG. 3, the virtual surface is set as a plane perpendicular to the user's viewing direction and parallel to the screen 10. Depending on the user's viewing direction, the virtual surface may instead be set as a plane that is not parallel to the screen 10. For example, the virtual surface shown in FIG. 6, described later, is perpendicular to the user's viewing direction and not parallel to the screen 10. The user-side reference point P0 is not limited to the midpoint between both eyes; it may be set, for example, to the center point of the head or face.
The virtual surface space is set as a space that enables predetermined operations as remote operations and in which those operations are detected. The first virtual surface 201 is a virtual surface related in particular to cursor display control. The second virtual surface 202 is the reference virtual surface related in particular to the determination of touch operations and the like. These two virtual surfaces are set on the reference axis J0 at positions toward the user-side reference point, within reach of the user's hand. The space 100 and the virtual surfaces naturally have no physical substance; the display device 1 manages them as computational information.
The user can freely move a finger up, down, left, right, forward, and backward with respect to the virtual surface space. For example, the user moves the finger back and forth in the direction of the reference axis J0 toward the position of a GUI object on the screen 10 to perform a touch or tap operation.
The binocular parallax calculation unit 23 detects the position and movement of the finger in the virtual surface space. In analyzing the camera images, the binocular parallax calculation unit 23 measures the distance D1 between the point Q0 and the face point P0 and the distance D2 between the point Q0 and the finger point F0 by distance measurement processing based on binocular parallax. The binocular parallax calculation unit 23 detects the position of the point P0 as position coordinates (Xp, Yp, Zp) at each point in time, and the position of the point F0 as position coordinates (Xf, Yf, Zf) at each point in time. The binocular parallax calculation unit 23 also determines the position coordinates (xf, yf) within the screen 10 associated with the position of the finger point F0 in the virtual surface space.
[Space (2) - Distances and lengths]
FIG. 4 likewise shows the space 100 viewed from the side, enlarged, with the virtual surface space, an example of finger positions, and various distances and lengths. FIG. 4 shows the second virtual surface 202 being touched at the fingertip point F0. In FIG. 4, taking the position Z0 of the screen 10 as 0 in the Z direction, the position of the point P0 is Zp, the position of the first virtual surface 201 is Z1, the position of the second virtual surface 202 is Z2, and the position of the point F0 is Zf.
The finger point F0 enters the space 100 from outside in accordance with the user's movement. For example, the point F0 first enters the first space 101, then passes from the first space 101 through the first virtual surface 201 into the second space 102, and then passes from the second space 102 through the second virtual surface 202 into the third space 103. The entry of the finger is not limited to the Z direction; it includes free movement up, down, left, right, forward, and backward, including the X and Y directions. In the example of FIG. 4, points F1 to F5 are shown as a time-series trajectory of the finger position. Initially the point F0 is outside the space 100. It then moves from a point outside the space 100 to a point F1 in the first space 101, from the point F1 to a point F2 in the second space 102, from the point F2 to a point F3 in the third space 103, from the point F3 to a point F4 in the second space 102, and finally from the point F4 to a point F5 outside the space 100.
In the example of FIG. 4, the current finger position F0 is at the point F2, and the distance D2 at that time is the distance from the point Q0 to the point F2.
In the Z direction along the reference axis J0, the lengths L1 and L2 are predetermined lengths forward from the point P0. The position at length L1 from the point P0 in the Z direction is the position Z1, with the corresponding point C1, and the position at length L2 is the position Z2, with the corresponding point C2. The center of the first virtual surface 201 is placed at the point C1 at the position Z1, and the center of the second virtual surface 202 is placed at the point C2 at the position Z2.
The distance DST is the distance from the fingertip point F0 to the second virtual surface 202 in the direction of the reference axis J0; it corresponds to the difference between the positions Zf and Z2 and has a positive or negative sign. The distance DST is positive when the point F0 is in front of the second virtual surface 202, nearer the user. The binocular parallax calculation unit 23 calculates the distance DST. The distance DST is a value representing the positional relationship of the finger with respect to the virtual surface space and its degree of entry.
The thickness M is the distance between the first virtual surface 201 and the second virtual surface 202 in the direction of the reference axis J0, and is about 5 cm as an example. The thickness M can also be set according to the predetermined lengths L1 and L2.
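As a rough sketch of how the quantities defined above could be evaluated (an illustration, not the disclosed implementation), the signed distance DST and the space in which the finger currently lies follow from the depth of the point F0 along the reference axis J0; the function name and helper conventions are assumptions.

```python
import numpy as np

def finger_state(p0, c2, j0_unit, f0, thickness_m):
    """Illustrative classification of the finger point F0 against the virtual
    surface space: returns the signed distance DST to the second virtual surface
    202 and which region (first/second/third space) the finger occupies along J0."""
    depth_f0 = float(np.dot(np.asarray(f0, float) - np.asarray(p0, float), j0_unit))
    depth_c2 = float(np.dot(np.asarray(c2, float) - np.asarray(p0, float), j0_unit))
    dst = depth_c2 - depth_f0          # positive while the finger is in front of surface 202
    if dst > thickness_m:
        region = "first space 101"     # between the user and the first virtual surface 201
    elif dst > 0.0:
        region = "second space 102"    # inside the virtual surface space
    else:
        region = "third space 103"     # beyond the second virtual surface (touch side)
    return dst, region
```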
[Space (3) - Vertical]
FIG. 5 shows the space 100 viewed from above, along the vertical direction, that is, the XZ plane viewed from the Y direction. The user-side reference point P0 is, roughly speaking, set at the end of the circumference of the user's head nearer the screen 10. Half of the width of the screen 10 in the X direction is denoted H1.
In the example of FIG. 5, points F11, F12, F13, and F14 are shown as examples of the position of the finger point F0. As a trajectory, the finger moves, for example, from a point F11 in the first space 101 to a point F12 in the second space 102. Within the second space 102, the user can freely move the finger up, down, left, and right, that is, in the X and Y directions, according to the positions of GUI objects and the like on the screen 10; for example, from the point F12 to the point F14. To select a desired GUI object on the screen 10, the user moves the finger to the position over that object. For example, suppose that in the GUI image on the screen 10, an object (for example, an option button) is arranged in the illustrated region 311 in the X direction. The region 312 on the second virtual surface 202 associated with the region 311 of the screen 10 is also shown.
Suppose the user performs a touch operation to select that object. The user moves the finger to a position over the region 312 in the virtual surface space, for example the point F12, and then moves the finger as if pressing the region 312. As a result, the finger position moves from the point F12, through the second virtual surface 202, to the point F13 in the third space 103. This movement is determined and detected as a touch operation and output as operation input information 301. Based on the operation input information 301 of this touch operation, the GUI display unit 14 performs selection confirmation processing for the object in the region 311. If the user then immediately returns the finger from the point F13 to a position in the second space 102, the movement is determined to be a tap operation, and if the user immediately performs a second tap in the same way, the movement is determined to be a double-tap operation. The GUI can be controlled in the same way using the operation input information 301 of a tap operation.
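The touch, tap, and double-tap determinations described here could be driven by sign changes of the distance DST, for example as in the following sketch; the timing window and class structure are assumptions made for illustration, not values from the disclosure.

```python
import time

class TapDetector:
    """Illustrative determination of touch / tap / double-tap from crossings of the
    second virtual surface 202, driven by the signed distance DST at each time point."""
    DOUBLE_TAP_WINDOW = 0.4  # seconds between taps to count as a double tap (assumption)

    def __init__(self):
        self.prev_dst = None
        self.last_tap_time = None

    def update(self, dst, now=None):
        now = time.monotonic() if now is None else now
        event = None
        if self.prev_dst is not None:
            if self.prev_dst > 0.0 >= dst:
                event = "touch"        # crossed surface 202 into the third space 103
            elif self.prev_dst <= 0.0 < dst:
                # returned to the second space 102: a tap, or a double tap if soon after the last tap
                if self.last_tap_time is not None and now - self.last_tap_time < self.DOUBLE_TAP_WINDOW:
                    event = "double_tap"
                    self.last_tap_time = None
                else:
                    event = "tap"
                    self.last_tap_time = now
        self.prev_dst = dst
        return event
```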
[Viewing position]
FIG. 6 likewise shows the space 100 viewed from above, for the case where the user's viewing position is offset at an angle from the center of the screen 10. The position of the user's eyes is not necessarily directly in front of the screen 10; it may be at an angle to the screen 10 in this way. Even in such a case, the virtual surface is set in basically the same manner and remote operation can be realized. In the example of FIG. 6, the reference axis J0 through the user-side reference point P0 is offset by an angle β from the straight line 320 perpendicular to the screen at the central point Q0. As long as this offset is within a certain range of the angle β, distance measurement based on binocular parallax is possible and remote operation can be realized in the same way.
In the state of FIG. 6, the user-side reference point P0 is detected based on the camera images, and the distance D1 between the point Q0 and the point P0 is measured. The straight line connecting the points P0 and Q0 is taken as the reference axis J0. On the reference axis J0, the first virtual surface 201 is set at the point C1 at length L1 from the point P0, and the second virtual surface 202 is set at the point C2 at length L2 from the point P0. Here, the first virtual surface 201 and the second virtual surface 202 are set as planes that are perpendicular to the reference axis J0 and not parallel to the screen 10.
FIG. 6 shows the case where the viewing direction is offset to the left or right in the X direction, but a virtual surface can be set in the same way when the viewing direction is offset up or down in the Y direction. As a modification, when there is an offset in the viewing direction as in FIG. 6, virtual surfaces parallel to the screen 10 may be set at the positions of the points C1 and C2.
[Overlap of screen and virtual surface]
FIG. 7 shows how the screen 10 and the virtual surface overlap in the field of view when the screen 10 is seen from the user's viewpoint. In the display system of Embodiment 1, the first virtual surface 201 and the second virtual surface 202 are set so that, in the X and Y directions, they overlap the rectangle of the screen 10 one-to-one at exactly the same size. The positions of content, GUI objects, and the like displayed on the screen 10 and the positions of the finger on the virtual surface therefore correspond one-to-one. The finger point F0 as seen by the user directly corresponds to the position at which it points to an object or the like on the screen 10, and the cursor displayed on the GUI appears at that same position. The position coordinates within the screen 10 associated with the position of the point F0 are denoted (xf, yf).
[Distance measurement based on binocular parallax]
FIG. 8 is an explanatory diagram supplementing the distance measurement based on binocular parallax using the two cameras, showing the XZ plane with the space 100 viewed from above. In FIG. 8, the user is looking straight at the central point Q0 of the screen 10, and the reference axis J0 is in the Z direction, along the normal of the screen 10. FIG. 8 also schematically shows examples of the image captured by the right camera 21 and the image captured by the left camera 22. The subject of the two cameras is a user A, and the captured images include user A's head, face, upper body, and so on.
The display system detects the distance and position of an object by applying the known principle of distance measurement based on binocular parallax. According to this principle, when the two cameras are regarded as two eyes, the depth distance to a subject can be measured from the binocular parallax between the captured images. The position coordinates of the camera-side reference point Q0, the point Q5 of the camera 21, and the point Q6 of the camera 22, as well as the length H1, are fixed, known values. The user-side reference point P0 is taken as the midpoint between both eyes. The convergence angle α1 at the point P0 is the angle corresponding to the distance D1, and the convergence angle α2 at the point F0 is the angle corresponding to the distance D2.
In analyzing the images captured by the two cameras, the display device 1 measures the distance D1 from the point Q0 to the point P0 and the distance D2 from the point Q0 to the point F0 by calculations for distance measurement based on binocular parallax. The distance D1 is obtained by calculation using the convergence angle α1 (= θ1 + θ2) and the length H1. The angles θ1 and θ2 are obtained by calculation from the shift between the left and right captured images (FIG. 9).
In FIG. 8, the two cameras are oriented toward the point P0 and the angles θ1 and θ2 are equal, but the actual orientation is not limited to this; the two cameras may be oriented in the same, parallel direction. It is sufficient that the user is within the imaging range.
Between the image 801 of the subject taken by the right camera 21 and the image 802 taken by the left camera 22, differences in image content arise according to the angles θ1 and θ2. As shown in FIG. 9, the binocular parallax calculation unit 23 counts the position of an object such as an eye in the image content in units of the pixels constituting the images 801 and 802 (pixel positions). The binocular parallax calculation unit 23 calculates the difference in the pixel position of the object between the two images as a difference δ. The distances D1 and D2 can be calculated using the difference δ and the focal lengths of the cameras (FD1, FD2).
For example, focusing on the right-eye point E1 as the object, the difference δ1 between the pixel position of the right-eye point E1 in the right image 801 and the pixel position of the right-eye point E1 in the left image 802 is calculated. Suppose the difference δ1 corresponds to about 2 cm, the head diameter 811 is about 19 cm, and the head circumference 812 is about 60 cm. Then the angle θ1 is θ1 = 360° × (δ1 / 60) / 2 = 6°. In the triangle formed by the user-side point P0 and the two camera points Q5 and Q6, the distance D1 is obtained as D1 = H1 / tan θ1. For example, when H1 = 1 m, substituting θ1 = 6° gives D1 = 9.5 m. The distance D2 can be measured in the same way.
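A quick numerical check of the worked example above (illustrative only):

```python
import math

# δ1 of about 2 cm on a head circumference of about 60 cm gives
# θ1 = 360° × (2/60) / 2 = 6°, and with H1 = 1 m, D1 = H1 / tan(θ1) ≈ 9.5 m.
delta1_cm = 2.0
circumference_cm = 60.0
theta1_deg = 360.0 * (delta1_cm / circumference_cm) / 2.0   # 6.0 degrees
H1_m = 1.0
D1_m = H1_m / math.tan(math.radians(theta1_deg))            # about 9.51 m
print(theta1_deg, round(D1_m, 2))
```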
When the screen 10 is large, the length H1 may be on the order of 10 m. Suppose the camera has a relatively modest resolution of, for example, 12 million pixels, and that its shooting range, for example the size of the room or venue, is 30 m × 40 m. Even in that case, the camera covers the range with a resolution of about 1 cm, which satisfies the specifications required in practice.
Note that the display system may always perform the above calculation in the same way during remote operation control, but the calculation may also be simplified by using a lookup table or the like. Experimental data is gathered in advance, output values are computed from the input values, and the correspondence between input values and output values is stored in the lookup table.
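As a minimal sketch of such a table-based simplification (the 0.1 cm table granularity and the helper names are assumptions made only for illustration), the following fragment precomputes the disparity-to-distance mapping once and looks values up at run time.

import math

def distance_from_disparity(disparity_cm, circumference_cm=60.0, h1_m=1.0):
    theta = math.radians(360.0 * (disparity_cm / circumference_cm) / 2.0)
    return h1_m / math.tan(theta)

# Precompute a table keyed by disparity in steps of 0.1 cm (assumed granularity).
LOOKUP = {round(i * 0.1, 1): distance_from_disparity(i * 0.1) for i in range(1, 101)}

def lookup_distance(disparity_cm):
    return LOOKUP.get(round(disparity_cm, 1))   # snap the input to the table key

print(round(lookup_distance(2.0), 1))           # about 9.5 (metres)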
[User-side reference point setting]
FIG. 9 shows the setting of the point P0, the user-side reference point, based on the analysis of the camera-captured images. In FIG. 8 and FIG. 9, the user's face appearing in the images 801 and 802 of the two cameras includes the right-eye point E1 and the left-eye point E2. The distance and position may be calculated for the midpoint between both eyes in the captured images, and that point may be set as the point P0. Alternatively, the distance and position may first be calculated for each of the right-eye and left-eye points in the captured images, and the midpoint between those points may then be set as the point P0.
For example, the binocular parallax calculation unit 23 of the display device 1 calculates the pixel position (x1, y1) of, for example, the right-eye point E1 in the image 801 of the right camera 21 and the pixel position (x2, y2) of the right-eye point E1 in the image 802 of the left camera 22, and then calculates the difference δ1 between these right-eye pixel positions (in other words, the pixel distance). Similarly, the display device 1 calculates the difference δ2 for the left-eye point. The display device 1 then calculates the angles θ1 and θ2 from the magnitudes of the pixel position differences δ1 and δ2.
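As a minimal sketch of setting the point P0 as the midpoint of the two eyes once each eye has been located in camera-side coordinates (the coordinate values and helper names below are illustrative assumptions, not values from the embodiment):

from dataclasses import dataclass

@dataclass
class Point3D:
    x: float
    y: float
    z: float

def midpoint(e1: Point3D, e2: Point3D) -> Point3D:
    # P0 is taken as the point halfway between the two eye positions
    return Point3D((e1.x + e2.x) / 2, (e1.y + e2.y) / 2, (e1.z + e2.z) / 2)

right_eye = Point3D(-0.03, 1.20, 2.50)   # example coordinates in metres
left_eye  = Point3D( 0.03, 1.20, 2.50)
p0 = midpoint(right_eye, left_eye)        # user-side reference point P0
print(p0)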
[Screen display control example (1)]
FIG. 10 shows an example of GUI display control of the screen 10 according to the position of the finger. The upper part of FIG. 10 shows an example of finger positions with the space 100 and the like viewed from the side, as in FIG. 4. The lower part of FIG. 10 shows screens G1 and so on as screen examples corresponding to the finger position; the screens G1 and so on are display examples of the screen 10 as seen by the user. As a GUI example of the screen 10, a menu screen is shown, which has two option buttons as GUI objects. The user selects an option by a touch operation on a button, and the GUI display unit 14 performs selection decision processing as the processing corresponding to the button touch operation. This touch operation is an operation of touching the second virtual surface 202. Specifically, it corresponds to a movement in which the finger point F0 moves from inside the second space 102, comes into contact with the second virtual surface 202, and enters slightly into the third space 103.
Initially, when the finger is outside the space 100, the screen 10 is in a state in which content or the like is displayed with the display function turned on. The user then moves a fingertip into the space 100, for example into the first space 101.
(1) When the position of the finger point F0 moves from outside the space 100 into the first space 101, for example to the position of the point F1, the display device 1 displays the screen G1, the first screen example. A menu screen is displayed on the screen G1. On the menu screen, GUI objects for giving operation instructions to the OS, applications, and the like of the display device 1 are arranged. GUI objects include various known objects such as option buttons, list boxes, and text input forms. In this example, the screen G1 has, as objects, two option buttons 401 and 402 for choosing between "option A" and "option B". In this state the display device 1 does not yet display a cursor. When the user wants to give an operation instruction, the user moves the fingertip from the first space 101 into the second space 102. If the finger returns from the first space 101 to outside the space 100, the menu screen is hidden.
(2) When the position of the finger point F0 advances from the first space 101 past the first virtual surface 201 into the second space 102, for example to the position of the point F2, the state changes to the screen G2, the second screen example. On the screen G2, the display device 1 displays a cursor K1 for remote operation control in accordance with the position of the finger in the virtual surface space. The cursor K1 is displayed at the position in the screen 10 associated with the position of the finger point F0 in the virtual surface space, and its display state is controlled according to the movement of the finger within the second space 102. The user can move the finger freely up, down, left, right, forward, and backward within the second space 102. For example, when the user wants to select "option B" of the option button 402, the user moves the finger so as to move the cursor K1 onto the option button 402 in order to reflect that intention, and the display device 1 displays the cursor K1 at the position over that object.
The display device 1 controls the cursor K1 so that it is displayed in a predetermined color, size, and shape according to the distance DST in the direction of the reference axis J0. For example, on the second screen G2, the display device 1 displays the cursor K1 as a yellow arrow, and displays it at a smaller size as the distance DST decreases. When the cursor K1 becomes small, the GUI objects become easier to see and operate. The user moves the finger further back so as to press the option button 402. If the finger is moved back from the second space 102 into the first space 101, the cursor K1 is hidden; if the finger is moved from the second space 102 back to outside the space 100, both the menu screen and the cursor K1 are hidden.
(3) When the position of the finger point F0 reaches the second virtual surface 202 and advances beyond it into the third space 103, for example to the position of the point F3, the state changes to the screen G3, the third screen example. While the finger position is within the third space 103, the display device 1 keeps the cursor K1 displayed on the screen G3. When the finger point F0 is on the second virtual surface 202, the size of the cursor K1 may be minimized. The display device 1 determines and detects the movement of the fingertip reaching and passing through the second virtual surface 202 as a touch operation in the remote operation. For example, the display device 1 determines and detects the movement from the point F2 to the point F3 as a touch operation on the option button 402, and gives operation input information 301 representing the detected touch operation to the GUI display unit 14. In response to the touch operation, the GUI display unit 14 performs the selection decision processing of "option B" as the processing corresponding to the option button 402, and updates the display state of the objects and the cursor K1 on the screen G3. For example, a selection decision effect is displayed for the option button 402, the option button 402 is shown pressed inward, and the color of the cursor K1 changes from yellow to red.
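The control of (1) to (3) can be summarized, purely as an illustrative sketch and not as the embodiment's actual implementation, by the following Python fragment; the depth thresholds, cursor sizes, and colors are assumptions.

from enum import Enum

class Region(Enum):
    FIRST  = 1   # first space 101: menu shown, no cursor
    SECOND = 2   # second space 102: cursor shown, size follows DST
    THIRD  = 3   # third space 103: reached by a touch operation

def classify(depth_from_p0, z1, z2):
    # depth_from_p0: finger depth along the reference axis J0, measured from P0
    # z1, z2: depths of the first and second virtual surfaces (z1 < z2)
    if depth_from_p0 < z1:
        return Region.FIRST
    return Region.SECOND if depth_from_p0 < z2 else Region.THIRD

def screen_state(prev_region, depth_from_p0, z1, z2):
    region = classify(depth_from_p0, z1, z2)
    if region is Region.FIRST:
        return region, {"menu": True, "cursor": None}
    if region is Region.SECOND:
        dst = z2 - depth_from_p0                    # distance to the second surface
        size = 8 + int(40 * dst / (z2 - z1))        # cursor shrinks as DST shrinks
        return region, {"menu": True, "cursor": {"size_px": size, "color": "yellow"}}
    touched = prev_region is Region.SECOND          # crossing Z2 counts as a touch
    return region, {"menu": True, "touch": touched,
                    "cursor": {"size_px": 8, "color": "red"}}

# Example: the finger advances from the second space across the second surface.
print(screen_state(Region.SECOND, 0.62, 0.4, 0.6))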
Another screen display control example is as follows. The display control in the first space 101 and the second space 102 is the same as above, but the display device 1 hides the cursor K1 when the finger enters the third space 103. For example, in the second space 102 the finger may be positioned over a background area of the screen 10 where there is no object such as a button, and from that position it may pass through the second virtual surface 202 into the third space 103. In that case, the display device 1 switches the cursor K1 to a hidden state, and displays the cursor K1 again when the finger position returns to the second space 102. With this control, the disappearance of the cursor K1 gives the user state feedback that the finger is not within the virtual surface space suitable for remote operation, and the user pulls the finger back. Instead of hiding the cursor in the third space 103, a specific cursor indicating the inappropriate state may be displayed.
As described above, the display control of the cursor K1 according to the distance DST gives the user visual feedback on the remote operation state. This allows the user to recognize more easily the degree of finger entry into the virtual surfaces, the positional relationship, and the state of operations such as touch operations, which improves usability.
Yet another screen display control example is as follows. Initially, when the finger is outside the space 100, nothing is displayed on the screen 10 because the display function of the display device 1 is off (the main power is off). The user moves a finger into the space 100, for example into the first space 101. The display device 1 detects the movement of the finger entering the first space 101 based on the analysis of the camera-captured video, and displays the menu screen of the screen G1 for the first time in response to that detection. The subsequent behavior is the same as above.
[Screen display control example (2)]
FIG. 11 shows display control examples of the GUI cursor and the like on the screen 10. (A1) shows a screen example in a state where the finger position is relatively far from the second virtual surface 202 in the virtual surface space. On this screen 10, an icon 411 and the like are displayed as GUI objects. While the finger is within the virtual surface space, the display device 1 displays the arrow-shaped cursor 410 while changing its size according to the distance DST. On the screen 10 in this state, the distance DST is relatively large, so the cursor 410 is displayed at a relatively large size; because the cursor 410 is still large, objects such as the icon 411 are difficult to select. (A2) shows a screen example in a state where the finger has approached the second virtual surface 202 from the state of (A1) and is relatively close to it. On the screen 10 in this state, the cursor 410 is displayed at a relatively small size; because the cursor 410 is small, the icon 411 and the like are easy to select. The user can sense the degree of approach to the second virtual surface 202 from the size of the cursor 410 and the like.
(B1) similarly shows another screen example in a state where the finger position is relatively far from the second virtual surface 202. On this screen 10, a double-circle cursor 420 is displayed. The inner circle of the double circle is constant and corresponds to the position of the finger point F0, while the radius of the outer circle is varied according to the distance DST. In the state of (B1), the radius of the outer circle is relatively large, so the cursor 420 is large. (B2) shows a screen example in a state where the finger has approached the second virtual surface 202 from the state of (B1); the radius of the outer circle is relatively small. When the finger point F0 reaches the position Z2 of the second virtual surface 202, the radius of the outer circle may be made equal to the radius of the inner circle.
(C1) similarly shows another screen example in a state where the finger position is relatively far from the second virtual surface 202. On this screen 10, a hand-shaped cursor 430 is displayed; in the state of (C1), the cursor 430 is a first-type hand image with the palm spread open. (C2) shows a screen example in a state where the finger has approached the second virtual surface 202 from the state of (C1). In the state of (C2), the type as well as the size is changed according to the distance DST; in particular, a second-type image showing a single pointing finger is used. In this way, the cursor type and the like may be changed according to the distance DST. The cursor may also be rendered as a transparent overlay through which the background remains visible.
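As an illustrative sketch of mapping the distance DST to the cursor variations of FIG. 11 (the pixel sizes and the 0.5 threshold for switching the hand type are assumptions, not values from the embodiment):

def cursor_appearance(dst, dst_max, style="arrow"):
    # dst: remaining distance to the second virtual surface (0 at contact)
    # dst_max: DST at the first virtual surface (largest value within the space)
    ratio = max(0.0, min(1.0, dst / dst_max))
    if style == "arrow":                        # (A1)/(A2): arrow shrinks with DST
        return {"shape": "arrow", "size_px": 16 + int(48 * ratio)}
    if style == "double_circle":                # (B1)/(B2): outer radius shrinks
        inner_px = 10
        return {"shape": "double_circle", "inner_px": inner_px,
                "outer_px": inner_px + int(40 * ratio)}
    return {"shape": "hand",                    # (C1)/(C2): palm far, one finger near
            "type": "open_palm" if ratio > 0.5 else "one_finger"}

print(cursor_appearance(0.02, 0.20, "double_circle"))   # close: outer ring nearly inner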
[Screen display control example (3)]
FIG. 12 shows display control examples of the GUI cursor and the like on the screen 10, together with examples of various operations. It illustrates a case where, when the degree of finger entry with respect to the virtual surfaces is fed back by cursor display control, the manner in which the finger is in contact with the second virtual surface 202 is displayed in more detail.
(A) shows a state in which the tip of one finger is in contact with the second virtual surface 202, and a screen example in that state. On this screen 10, an arrow-shaped cursor 410 is displayed, as in (A2) of FIG. 11, at the GUI position corresponding to the point F0 at the tip of the single finger on the virtual surface. In particular, when the fingertip touches the second virtual surface 202, the cursor 410 may be displayed in a distinctive state, for example with an effect indicating the contact. An example of GUI operation on this screen 10 is shown at the bottom. Examples of operations include a touch operation and a tap operation. The user performs, for example, a touch operation on an object such as a GUI button while the cursor 410 overlaps the object; that is, the user moves the single finger deeper along the reference axis J0 until the fingertip comes into contact with the second virtual surface 202. This movement is determined to be a touch operation, and the corresponding processing is performed in response to the touch operation on the second virtual surface 202. A tap operation and the like are possible in the same way.
(B) shows a state in which the palm is in contact with the second virtual surface 202, and a screen example in that state. On this screen 10, a cursor 440 having the cross-sectional shape of the hand is displayed at the GUI position corresponding to the position and cross section of the palm on the virtual surface. An example of GUI operation on this screen 10 is shown at the bottom. Examples of operations include a swipe operation. The user performs, for example, a swipe operation on a page or object of the screen 10; that is, the user moves the palm deeper along the reference axis J0 until it contacts the second virtual surface 202, and from that state slides the palm in a desired direction in the X direction or the Y direction. This movement is determined to be a swipe operation. A flick operation and the like are possible in the same way. The GUI display unit 14 performs processing such as page scrolling or object movement as the processing corresponding to these operations.
The finger cross-sectional shape of the cursor 440 may be a schematic representation. As an application, the size of the finger cross section may also be used for operation control; for example, a touch operation may be determined when the amount of change in the area of the finger cross section is equal to or greater than a threshold. For instance, at the moment when a single finger first contacts the second virtual surface 202, the cross section is small, and as the hand advances further the cross section becomes larger, corresponding to the whole hand.
(C) shows a state in which a plurality of fingers are in contact with the second virtual surface 202, and a screen example in that state. On this screen 10, a cursor 450 having the cross-sectional shape of the fingers is displayed at the GUI position corresponding to the positions and cross sections of the plurality of fingers on the virtual surface; for example, three fingers are touching the second virtual surface 202. An example of GUI operation on this screen 10 is shown at the bottom. Examples of operations include a pinch operation. The user performs, for example, a pinch-in or pinch-out operation on an image or the like on the screen 10: the user brings, for example, two fingers into contact with the second virtual surface 202 and, from that state, moves the two fingers so as to open or close them in a desired direction in the X direction or the Y direction. This movement is determined to be a pinch operation. The GUI display unit 14 performs image enlargement or reduction processing in accordance with these operations.
As a further application, the number of fingers in the finger cross section may be used for operation control; for example, when two fingers can be recognized from the finger cross section, the movement can be determined to be a pinch operation.
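The two cross-section heuristics just mentioned can be sketched as follows; the area threshold and the function and state names are illustrative assumptions rather than values from the embodiment.

def classify_contact(prev_area_cm2, area_cm2, finger_count,
                     area_jump_threshold_cm2=6.0):
    if finger_count >= 2:
        return "pinch_candidate"      # two or more fingers touching the surface
    if area_cm2 - prev_area_cm2 >= area_jump_threshold_cm2:
        return "touch"                # the contact area grew sharply: hand pushed in
    return "hover"

print(classify_contact(1.5, 9.0, 1))  # touch
print(classify_contact(2.0, 2.2, 2))  # pinch_candidate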
[Space shape]
FIG. 13 shows, as a supplement to the first embodiment, a detailed example of the shape of the space 100. The space 100 used for remote operation control may be a space 100 having a roughly quadrangular-pyramid shape with the two eye points E1 and E2 as two of its vertices, as shown in FIG. 13. The virtual surface space is likewise set so as to fit within this space 100. Both the space 100 of FIG. 1 and the space 100 of FIG. 13 can be used with almost no problem in terms of accuracy. The space 100 is not limited to the above shapes; for example, a rectangle may be taken near the face or head, and the space may be a pyramid having that rectangle's four corners as vertices.
[Predetermined operations]
FIG. 14 is an explanatory diagram for explaining the details of determination of the predetermined operations, and shows the XZ plane with the space 100 viewed from above. Points f0 to f5 are shown as examples of the finger position point F0. In addition, in the Z direction, an operation-control surface 221 is provided at a position Za at a predetermined distance 321 behind the position Z2 of the second virtual surface 202, and an operation-control surface 222 is provided at a position Zb at a predetermined distance 322 behind the position Z2 of the second virtual surface 202. When determining a touch operation, the virtual surface operation determination unit 24 may use only the second virtual surface 202, or may use the second virtual surface 202 and the surface 221. The virtual surface operation determination unit 24 may also use the second virtual surface 202 and the surface 222 when determining touch operations and tap operations.
As a way of thinking about the virtual surface space, the surface 221 at the position Za may be regarded as a third virtual surface and the surface 222 at the position Zb as a fourth virtual surface; the virtual surface space may be considered to consist of a plurality of virtual surface layers. Unlike an existing touch panel, the virtual surface space is a space that allows operations in which the fingers are moved back and forth with respect to invisible virtual surfaces within the space 100. The display device 1 feeds the remote operation state back to the user by controlling the display of the cursor and the like on the screen 10 in real time in accordance with that movement.
Referring to FIG. 14, examples of the predetermined operations possible in the display system as remote operations on the virtual surfaces with respect to the GUI of the screen 10 are described below. Each of the various predetermined operations allows control specific to the positional relationship with the virtual surfaces and the degree of entry. The display system determines the various operations according to the distance DST between the finger position point F0 and the position of each virtual surface in the virtual surface space. In FIG. 14, the distance in the state where the finger point F0 is at the point f2 in the second space 102 is indicated as the distance DST.
(1) The touch operation (long-press operation) of a conventional touch panel and the click (hold) operation of a mouse can be realized in this display system as the following specific touch operation. Roughly speaking, this touch operation is an operation of bringing the finger into contact with the second virtual surface 202. More precisely, it is an operation of keeping the finger point F0 at a position within the range from the position Z2 to the surface 221 at the position Za, or within the range from the position Z2 to the surface 222 at the position Zb.
When the position of the finger point F0 moves from the point f2 in the second space 102, contacts the second virtual surface 202, and is at the point f3, or when it is within the range up to the surface 221 at the distance 321 in the third space 103, the display device 1 makes the following determination. If the X-direction and Y-direction positions correspond to a position on a GUI object, the movement from the point f2 to the point f3 can be associated with a touch operation on that object. Expressed as a condition on the distance DST, the negative value of the distance DST is greater than or equal to 0 and less than or equal to the distance 321. Whether this condition is used can be set according to the type of object.
When the X-direction and Y-direction positions of the finger correspond to an area of the GUI with no object, such as a background area, the display device 1 does not associate the movement with a touch operation or the like. In that case, as for the cursor display state, for example, the same state as the cursor display state at the time of contact with the second virtual surface 202 is maintained.
Further, when the position of the finger point F0 advances beyond the second virtual surface 202, for example into the range from the surface 221 to the surface 222, for example to the position of the point f4, the display device 1 makes the following determination. If the X-direction and Y-direction positions correspond to a position on a GUI object, the movement from the point f2 to the point f4 can be associated with a touch operation on that object. Expressed as a condition on the distance DST, the negative value of the distance DST is greater than the distance 321 and less than or equal to the distance 322. As before, in this state the movement can be associated with a touch operation on the object according to the type of object. For example, in the case of the option buttons of FIG. 10, the touch operation is determined using either of the above conditions.
Also, for example, when the object is a volume adjustment button, moving the finger back and forth within the range between the position Za and the position Zb can be determined to be a touch operation and associated with adjusting the volume according to the distance DST.
The display device 1 may also count time when determining the touch operation. For example, the display device 1 may determine a long-press operation when the state in which the finger point F0 is within the range between the position Z2 and the position Za continues for a predetermined time.
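As an illustrative sketch of the touch and long-press determination using the distances 321 and 322 and a hold time (the one-second hold duration and the state names are assumptions, not part of the embodiment):

import time

def touch_state(depth_past_z2, dist_321, dist_322):
    # depth_past_z2: signed depth beyond the second virtual surface (0 at Z2)
    if depth_past_z2 < 0:
        return "no_touch"             # still in front of the second surface
    if depth_past_z2 <= dist_321:
        return "touch"                # between Z2 and Za
    if depth_past_z2 <= dist_322:
        return "deep_touch"           # between Za and Zb
    return "out_of_range"             # beyond Zb

class LongPressDetector:
    def __init__(self, hold_seconds=1.0):
        self.hold_seconds = hold_seconds
        self._started = None

    def update(self, state):
        # Returns True once the touch has been held for hold_seconds.
        if state in ("touch", "deep_touch"):
            if self._started is None:
                self._started = time.monotonic()
            return time.monotonic() - self._started >= self.hold_seconds
        self._started = None
        return False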
Another example of cursor display control is as follows. The surface 222 at the deepest position Zb is treated as a fourth virtual surface. While the distance DST is within the range up to the surface 222, the display device 1 displays the cursor on the screen 10; when the finger goes beyond the surface 222, the display device 1 hides the cursor and invalidates the remote operation. When the finger returns to the near side of the surface 222, the cursor is returned to the displayed state.
(2) The tap operation of a conventional touch panel and the click operation of a mouse can be realized in this display system as the following specific tap operation. Roughly speaking, this tap operation is an operation of pushing the finger so as to strike the second virtual surface 202 and immediately pulling it back. As the transition of the finger point F0, the finger moves from, for example, the point f2 in the second space 102, passes through the second virtual surface 202 to the point f3 within the range of the distance 321 (or the point f4 within the range of the distance 322), and then, within a short time from the point f3, passes back through the second virtual surface 202 to a position in the second space 102. In that case, the movement can be associated with a tap operation.
Similarly, the conventional double-tap and double-click operations can be realized in this display system as a specific double-tap operation or the like. Roughly speaking, the double-tap operation is an operation of repeating the tap operation on the second virtual surface 202 twice within a short time.
(3) The swipe operation of a conventional touch panel and the drag operation of a mouse can be realized in this display system as the following specific swipe operation or the like. Roughly speaking, this swipe operation is an operation of sliding the finger point F0 in a desired direction in the X direction and the Y direction while keeping it in touch within the range of the distance 321 from the second virtual surface 202 (or within the range of the distance 322). For example, to move an object, the user pushes the finger past the second virtual surface 202 at the position over the object on the screen 10, moves it to the desired position in the X direction and the Y direction, and then returns the finger to a position in the nearer second space 102. The display device 1 determines and detects this swipe operation, and the GUI display unit 14 moves the object in accordance with it. The same applies to page scrolling and the like.
(4) The flick operation of a conventional touch panel can be realized in this display system as the following specific flick operation. Roughly speaking, this flick operation is an operation of pushing the finger to a position beyond the second virtual surface 202 and then returning it to a position in the second space 102 while quickly moving it in a desired direction in the X direction and the Y direction. For example, in the case of page scrolling, the GUI display unit 14 determines the amount of page movement and the like according to the speed and amount of change of the finger movement during the flick operation, and transitions the page display state.
(5) The pinch operation of a conventional touch panel can be realized in this display system as the following specific pinch operation. Roughly speaking, this pinch operation is an operation of opening or closing two finger points in a desired direction in the X direction and the Y direction while keeping both points within the range of the distance 321 (or the distance 322) beyond the second virtual surface 202. For example, the user brings two points simultaneously into the range of the distance 321 beyond the second virtual surface 202, opens the two fingers to pinch out or closes them to pinch in, and then returns the two fingers to the second space 102. The GUI display unit 14 can associate the pinch-out and pinch-in operations with enlargement and reduction processing of the image on the screen 10.
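As a rough, non-limiting sketch of distinguishing the tap, swipe, and flick operations from a short trace of finger samples (a pinch would additionally track two finger points and is omitted here; all thresholds are assumptions made for illustration):

def classify_gesture(trace, tap_max_s=0.3, move_min_m=0.05, flick_speed_mps=0.5):
    # trace: chronological samples (t_seconds, x_m, y_m, depth_past_z2_m)
    inside = [s for s in trace if s[3] > 0]     # samples beyond the second surface
    if not inside:
        return "none"
    t0, x0, y0, _ = inside[0]
    t1, x1, y1, _ = inside[-1]
    duration = t1 - t0
    travel = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if travel < move_min_m:
        return "tap" if duration <= tap_max_s else "touch_hold"
    speed = travel / duration if duration > 0 else float("inf")
    return "flick" if speed >= flick_speed_mps else "swipe"

print(classify_gesture([(0.0, 0.0, 0.0, -0.01), (0.1, 0.0, 0.0, 0.02),
                        (0.2, 0.0, 0.0, -0.01)]))   # tap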
[Virtual surface adjustment]
FIG. 15 shows the virtual surface adjustment for each individual user in the first embodiment. As one of its functions, the display device 1 has a function of setting a virtual surface space suited to the individual user. With this function, based on the standard virtual surface space, the position of each virtual surface can be adjusted so as to shift forward and backward along the reference axis J0, and the thickness M of the virtual surface space can also be adjusted. As the positions of the virtual surfaces shift, their sizes are also adjusted so as to fit within the quadrangular-pyramid shape of the space 100. Individual users can be identified using the individual recognition unit 26, as described above.
(A) of FIG. 15 first shows the standard positions and so on of the standard virtual surfaces. This virtual surface information is set in advance as default values. The lengths L1 and L2 of predetermined length and the thickness M are shown as the standard positions. Each virtual surface has a standard size in the XY plane, matched to the aspect ratio of the screen 10. When the user is a new user and no virtual surface information has been registered, standard virtual surfaces are set using the default values.
(B) of FIG. 15 shows an example of the state of the virtual surfaces after adjustment relative to (A). For example, when the user is a child, the surfaces are adjusted to positions nearer than the Z-direction positions of the standard virtual surfaces for adults. Along with the adjustment of the Z-direction positions, the sizes of the virtual surfaces are also reduced to fit the space 100. As described above, the virtual surfaces can be adjusted for each individual user as a user setting by using the virtual surface adjustment unit 28. The user can register in advance a face image or finger image for individual recognition, a user ID, or the like, and can set the position and size of the virtual surfaces suited to himself or herself.
The position Z2 of the second virtual surface 202 in (A) is shifted, in (B), to the position Z2s on the reference axis J0 so as to approach the point P0 by the distance 331, and the center point C2 of the second virtual surface 202 becomes the center point C2s. The predetermined length L2 is changed to the length L2s. Similarly, the position Z1 of the first virtual surface 201 is shifted to the position Z1s on the reference axis J0, the center point C1 of the first virtual surface 201 becomes the center point C1s, and the predetermined length L1 is changed to the length L1s. With the shift of the positions, the thickness M of the virtual surface space is changed to the thickness Ms. The second virtual surface 202s is set perpendicular to the axis at the center point C2s, with a size that fits within the space 100, and the first virtual surface 201s is set perpendicular to the axis at the center point C1s, with a size that fits within the space 100.
The distance 331 is an offset distance for adjustment and can be set by the user. In making the above adjustment, the adjusted setting values may also be obtained by applying a calculation such as multiplying the default values by an adjustment coefficient. An adjustment that shifts the virtual surfaces to deeper positions on the reference axis J0 is likewise possible.
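As an illustrative sketch of the per-user adjustment, shifting a surface along the reference axis J0 by an offset and rescaling it to stay within the pyramid-shaped space 100 (the proportional-scaling rule and the numeric values are assumptions, not taken from the embodiment):

from dataclasses import dataclass

@dataclass
class VirtualSurface:
    length_from_p0: float   # distance from the user-side reference point P0 (m)
    width: float            # size in the XY plane (m)
    height: float

def adjust_surface(surface, offset, axis_length, screen_w, screen_h):
    # Shift the surface toward P0 by `offset`, then shrink it so that it still
    # fits the pyramid whose apex is P0 and whose base is the screen plane.
    new_len = surface.length_from_p0 - offset
    scale = new_len / axis_length               # cross-section grows linearly with depth
    return VirtualSurface(new_len, screen_w * scale, screen_h * scale)

# A child-sized shift of 0.10 m on a 2.0 m axis toward a 1.2 m x 0.7 m screen.
print(adjust_surface(VirtualSurface(0.50, 0.30, 0.17), 0.10, 2.0, 1.2, 0.7))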
As a modification, the following virtual surface adjustment may be performed. Based on the analysis of the camera-captured images, the display device 1 determines a rough classification of the target user, such as male/female or adult/child, and automatically applies a virtual surface of the position and size set in advance for each classification as setting information.
For the virtual surface space, it is preferable that the position on the reference axis J0 can be selected and adjusted according to the arm length and the like of the individual user. In general, for example, adults have longer arms than children, and men have longer arms than women. It is preferable that the virtual surface space be set at a suitable position for each individual user in a family. Using the above adjustment function, the present display system can set a suitable virtual surface for each individual user, and the distance between the point P0 and the virtual surfaces and so on is set to an appropriate length. This improves usability for each user. Note that in this display system, even when the same standard virtual surface space is set for, for example, an adult and a child, usability differs only slightly because of differences such as arm length, and no major problem arises. The standard virtual surfaces may also be set using statistical values such as averages over a group of users.
[Effects]
As described above, according to the first embodiment, the user's remote operation on the screen 10 of the display device 1 can be controlled, the required amount of learning is small, and usability for the user can be made good. According to the first embodiment, the user does not need to memorize and distinguish a plurality of gestures in order to use remote operation; the amount of learning is minimal, and remote operation is possible by simple finger movements, so the system is easy to use. The user can easily perform operation inputs on the screen 10 of a television receiver or the like as remote operations, empty-handed, without needing to use a remote control device. The user only needs to perform operations on the virtual surface space in the manner of conventional mouse or touch panel operations, which is convenient. According to the first embodiment, the virtual surfaces are formed at suitable positions in front of the user's line of sight, and feedback by cursor display is performed according to the finger position. This allows the user to perform remote operations on empty space intuitively and smoothly.
In the prior-art examples, the ease of remote operation in the depth direction between the user's viewpoint and the screen is not sufficiently considered. In the first embodiment, by contrast, touch operations and the like specific to that depth direction are realized, and the ease of remote operation is fully taken into account.
Furthermore, according to the first embodiment, there is no need for processing that distinguishes and detects a plurality of gestures based on the analysis of camera-captured images, so the processing load on the computer can be kept relatively low, which is advantageous for implementation. The display device 1 of the first embodiment sets the virtual surface space and performs processing that monitors the degree of finger entry into the virtual surface space to detect the predetermined operations. This processing has a lower computational load than the processing of distinguishing and detecting a plurality of gestures in the prior-art examples.
The following are examples of modifications of the first embodiment. First, the two cameras may be integrated into a single camera having an advanced function of measuring the distance to an object; the processor of the display device 1 then performs the processing using the data from that camera.
[Modification (1)]
FIG. 16 shows the arrangement of the two cameras, the space 100, and the like in the display system of a first modification of the first embodiment. FIG. 16 also shows, as an example of the user's viewing posture, a case where the user views the screen 10 while lying on the floor and operates the virtual surfaces with the right hand. The user's posture can change in this way, and remote operation is possible in each posture.
In the first modification, the cameras 21 and 22 are arranged at the upper-right point Q1 and the upper-left point Q2 with respect to the screen 10. In this arrangement, the camera-side reference point is basically the midpoint between the two cameras, that is, the point Q7 in the middle of the upper side of the screen 10. Even with this arrangement, the virtual surfaces can be set and controlled as in the first embodiment. In the first modification, the virtual surfaces of the space 100 are basically as shown in (A) of FIG. 17, and are set as shown in (B) of FIG. 17 by correction.
FIG. 17 shows the correction of the virtual surfaces in the first modification. (A) of FIG. 17 shows the state before correction. By analyzing the camera-captured images, the distance D3 between the point Q7, the camera-side reference point, and the point P0, the user-side reference point, is obtained. The reference axis J0 used for setting the virtual surfaces is the straight line connecting the point Q7 and the point P0. On the reference axis J0, the first virtual surface 201a is set as a perpendicular plane centered on the point C1a at the position of the predetermined length L1 from the point P0 toward the point Q7. Similarly, the second virtual surface 202a is set at the point C2a at the position of the length L2. The size of the second virtual surface 202a is a size at a predetermined ratio corresponding to the size of the screen 10.
Note that the first virtual surface 201a and the second virtual surface 202a before correction may be used for control as they are. When the user's viewing position is sufficiently far from the screen 10, there is almost no problem in terms of accuracy even if such virtual surfaces are used.
(B) of FIG. 17 shows the virtual surfaces after correction from (A). In the first modification, the first virtual surface 201a and the second virtual surface 202a are corrected to obtain the corrected first virtual surface 201b and second virtual surface 202b. Several correction methods are possible; when angular rotation is used, the procedure is as follows. With the point P0, the user-side reference point, as the center of rotation, the reference axis J0 and the virtual surfaces are rotated by the angle γ so as to be aligned with the center point Q0 of the screen 10. The angle γ is obtained by calculation using the position of the point P0, the length V1, and the like. The reference axis J0 after rotation is the straight line connecting the point Q0 and the point P0. The rotated virtual surfaces are the first virtual surface 201b and the second virtual surface 202b, whose center points are the points C1b and C2b.
Another correction method is as follows. Using the position of the point P0 relative to the point Q7, the camera-side reference point, the distance D3, and the length V1, the position of the point P0 relative to the center point Q0 of the screen 10 and the distance D1 are obtained by calculation. Then, on the reference axis J0 from the point Q0 to the point P0, the corrected virtual surfaces, namely the first virtual surface 201b and the second virtual surface 202b, may likewise be set using the lengths L1 and L2.
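As an illustrative sketch of this second correction method, the corrected center points C1b and C2b can be computed as the points at the lengths L1 and L2 from P0 along the axis toward Q0; the coordinate values in the example are assumptions.

import math

def corrected_centers(p0, q0, l1, l2):
    # p0, q0: (x, y, z) of the user-side and screen-centre reference points (m)
    dx, dy, dz = q0[0] - p0[0], q0[1] - p0[1], q0[2] - p0[2]
    d1 = math.sqrt(dx * dx + dy * dy + dz * dz)     # distance P0 -> Q0
    ux, uy, uz = dx / d1, dy / d1, dz / d1          # unit vector along the axis J0
    c1b = (p0[0] + ux * l1, p0[1] + uy * l1, p0[2] + uz * l1)
    c2b = (p0[0] + ux * l2, p0[1] + uy * l2, p0[2] + uz * l2)
    return c1b, c2b

print(corrected_centers((0.0, 1.2, 3.0), (0.0, 1.0, 0.0), 0.4, 0.6))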
As yet another correction method, only the second virtual surface is first set between the point Q7 and the point P0, and that second virtual surface is corrected to a position on the reference axis J0 between the point Q0 and the point P0. The first virtual surface is then set at a position a predetermined thickness in front of the corrected second virtual surface.
The arrangement of the two cameras is not limited to the above form; various arrangements are possible. It is sufficient to arrange the two cameras at arbitrary positions to the left and right of the center point Q0 of the screen 10; for example, the positions of the lower-right point Q3 and the lower-left point Q4 may be used. Three or more cameras may also be arranged to improve the accuracy of the distance measurement based on binocular parallax.
[Modification (2)]
FIG. 18 shows a second modification. In the second modification, a single virtual surface 200 is set within the space 100. The virtual surface 200 is set on the reference axis J0, centered on the point C0 at the position Zc at a predetermined length L0 from the point P0, with a size that fits within the space 100. The single virtual surface 200 divides the space 100 into a first space 111 on the side nearer the user and a second space 112 on the far side. Operations such as touching and pushing the fingers against this virtual surface 200 are possible.
The lower part of FIG. 18 shows an example of display control of the screen 10 according to the finger position. When the finger position point F0 is in the first space 111, for example at the point Fa, the screen GA, for example, is displayed. The screen GA has GUI objects such as a menu screen, and the cursor K1 is displayed. As in the first embodiment, the display device 1 changes the size of the cursor K1 and the like according to the distance DST, which represents the positional relationship of the finger with respect to the virtual surface 200.
When the finger position point F0 contacts the virtual surface 200 and is, for example, at the point Fb, or passes through the virtual surface 200 into the second space 112 and is, for example, at the point Fc, the display changes to, for example, the screen GB. The movement from the point Fa to the point Fb or the point Fc is determined and detected as a touch operation, and operation input information 301 representing that touch operation is given to the GUI display unit 14. The GUI display unit 14 performs the processing corresponding to the object in accordance with the touch operation, and updates the display state of the object and the cursor K1. A tap operation and the like on the virtual surface 200 are possible in the same way.
Even when no GUI object or the like is displayed on the screen 10, a touch operation or the like on the virtual surface 200 as a whole is possible. For example, with nothing displayed on the screen 10, the virtual surface 200 is set based on camera shooting, and a touch operation on the virtual surface 200 is accepted. That touch operation is associated with a predetermined operation instruction to the display device 1, for example power-on (switching the display function to the on state). In that case, feedback such as cursor display control may be omitted.
Furthermore, the whole of the XY plane of the virtual surface 200 may be divided into a plurality of regions, for example into left and right regions, and touch operations and the like may be accepted for each region. For example, a tap operation on the right region may be associated with a first operation instruction and a tap operation on the left region with a second operation instruction.
[Modification (3)]
FIG. 19 shows, for a third modification, the state in which the screen 10 and the virtual surfaces overlap in the user's field of view. In the third modification, the setting and adjustment of the virtual surfaces are limited to a partial region of the whole XY plane of the screen 10. In FIG. 19, the first virtual surface 201 and the second virtual surface 202, which are the virtual surfaces, are set so as to overlap the right-half region 191 of the whole XY plane of the screen 10. In this virtual surface region 191, remote operations such as touch operations are accepted as valid and the cursor K1 is displayed. In the left-half region 192, remote operations are treated as invalid and not accepted, and the cursor K1 is not displayed. In addition, a frame or the like indicating the region 191 may be displayed on the screen 10.
 例えば、GUI画面のうち一部領域のみを遠隔操作可能としたい場合に、このように仮想面を限定して設定することで、一部領域外に対する無用な操作を省略できる。 For example, when it is desired to remotely control only a partial area of the GUI screen, unnecessary operations outside the partial area can be omitted by setting the virtual plane in this way.
 (Embodiment 2)
 A display system including the remote operation control device according to the second embodiment of the present invention will be described with reference to FIG. 20.
 FIG. 20 shows the functional block configuration of the display system of the second embodiment. The display system of the second embodiment is a system in which the display device 1 and a remote operation control device 3 are connected. The remote operation control device 3, an independent device separate from the display device 1, has the functions of the remote operation control unit 20 of the first embodiment. The remote operation control device 3 controls the user's remote operation on the GUI and the like of the screen 10 of the display device 1. The remote operation control device 3 generates operation input information 301 for the remote operation and transmits it to the display device 1. Based on the operation input information 301, the display device 1 controls the GUI and the like of the screen 10 as in the first embodiment.
 The display device 1 includes a communication unit 16 in addition to the control unit 11 and other components similar to those in FIG. 2. The communication unit 16 receives the operation input information 301 from the communication unit 33 of the remote operation control device 3 and passes it to the GUI display unit 14 and other units.
 The remote operation control device 3 includes components corresponding to the remote operation control unit 20 of FIG. 2, and further includes a control unit 31, a storage unit 32, a communication unit 33, and so on. The control unit 31 controls the entire remote operation control device 3. The storage unit 32 stores information and data for control. The communication unit 33 performs communication processing with the communication unit 16 of the display device 1. The communication unit 16 and the communication unit 33 each include a communication interface device corresponding to a predetermined communication interface. The communication unit 33 transmits the operation input information 301 output from the operation input information output unit 25 to the communication unit 16 of the display device 1.
 In the second embodiment as well, remote operation can be realized in the same manner as in the first embodiment. In the second embodiment, there is no need to implement the remote operation control function in the display device 1, so existing display devices can also be used. Various display devices 1 can be connected to the remote operation control device 3 as necessary to constitute a display system.
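 As one possible sketch of how the operation input information 301 could be packaged and sent from the remote operation control device 3 to the display device 1, the following assumes a JSON message over a TCP socket; the field names and the transport are assumptions, not the protocol defined by the specification.

```python
# Illustrative sketch: sending operation input information 301 over a socket.
import json
import socket

def send_operation_input(sock: socket.socket, screen_xy, operation: str) -> None:
    message = {
        "position": {"x": screen_xy[0], "y": screen_xy[1]},  # position in screen coordinates
        "operation": operation,                              # e.g. "touch", "tap", "swipe"
    }
    sock.sendall((json.dumps(message) + "\n").encode("utf-8"))
```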
 (Embodiment 3)
 A display device and the like according to the third embodiment of the present invention will be described with reference to FIGS. 21 to 24. The basic configuration of the third embodiment is the same as that of the first embodiment; the following describes the parts of the third embodiment that differ from the first embodiment. The display system of the third embodiment has a function of saving power by controlling operations such as power on/off of the display device 1 in connection with remote operation control.
 [Display System]
 FIG. 21 is a perspective view showing the configuration of a display system including the display device 1 of the third embodiment. The display device 1 includes a control board 600, a human sensor 601, a main power supply unit 602, and so on inside its housing, and these are connected by communication lines and power supply lines. FIG. 21 shows a case where two cameras are built into the housing of the display device 1, with the lens portions of the cameras exposed to the outside. The control unit 11 and other components of FIG. 2 are mounted on the control board 600 as electronic circuits.
 The main power supply unit 602 supplies power to each unit. The human sensor 601 is basically kept in the power-on state at all times. The two cameras and the display function portion of the display device 1 are normally in the power-off state, in other words, in a standby state.
 The human sensor 601 detects the presence of a person within a predetermined range, such as around the display device 1, using infrared rays or the like. When the human sensor 601 detects the presence of a person, a detection signal is sent to the control board 600. In response to the detection signal, the display device 1 turns on the power of the two cameras and also turns on the display function. The cameras then start shooting, and the display device 1 starts processing using the camera images. For example, the display device 1 performs user personal recognition processing based on analysis of the face image in the captured images. At that time, the display device 1 may determine the brightness of the room based on a camera or another sensor. When the display device 1 determines that the room is dark and the brightness is insufficient for the personal recognition processing, it turns on the display of the screen 10 and controls the screen 10 to display a high-luminance image, for example a menu screen with a high-luminance background. This supports the personal recognition processing. Based on the user personal recognition, the display device 1 sets virtual surfaces in the space 100 and enters a mode in which remote operation is possible.
 The display device 1 may also control its own power-on by the user's remote operation as follows. The user holds a finger up toward the screen 10 at a position near the front of the screen 10. Based on analysis of the camera images, the display device 1 detects the movement of the finger entering the space 100 and interprets that movement as a power-on operation instruction. As a result, power is supplied from the main power supply unit 602 to the units constituting the display function, and the display function is turned on. Content, a GUI menu screen, or the like is displayed on the screen 10.
 Even when the human sensor 601 detects the presence of a person, the power-off state is maintained as long as the user does not put a finger into the space 100. The user can continue activities other than viewing. In addition, when the main body of the display device 1 is being cleaned or maintained, or when the display device 1 has not been used for a long period, the main power supply unit 602 shifts the entire display system, including the human sensor 601, to a completely powered-off state.
 The human sensor 601 detects that a person has entered a predetermined range, such as the area around the display device 1 or the entrance of the room. When the cameras are turned on in response to the person detection, the cameras capture a predetermined range and output captured video signals. Based on analysis of the captured images, the display device 1 detects whether a user is within the predetermined range. When a user is within the predetermined range, the display device 1 determines that the user is likely to use the display function and turns the display function on; that is, it starts power supply from the main power supply unit 602 to each unit of the control board 600.
 Alternatively, the following procedure may be used. When a user is detected based on the cameras, only the remote control function is turned on and the display function remains off. The screen 10 remains blank. Using the remote control function and the camera images, the display device 1 determines a predetermined action by the user. The predetermined action is an action defined in advance for turning on the display function. This predetermined action can be defined arbitrarily; examples include an action of putting a finger into the virtual surface space of the space 100 and a touch operation on the second virtual surface 202. The display device 1 may also detect, as the predetermined action, the state in which a finger remains in the virtual surface space for a certain time or longer.
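 A minimal sketch of the power-saving control described above follows, assuming a controller object that wraps the human sensor 601, the cameras 21 and 22, and the display function; the method names and the two placeholder checks are assumptions for illustration.

```python
# Illustrative sketch: the human sensor wakes the cameras, and the display function
# is turned on only when the user is detected near the screen or performs the
# predetermined action (e.g. putting a finger into the virtual surface space).
class PowerController:
    def __init__(self, sensor, cameras, display):
        self.sensor = sensor      # human sensor 601
        self.cameras = cameras    # cameras 21, 22
        self.display = display    # display function of the display device 1

    def step(self):
        if not self.sensor.person_detected():
            return                               # stay in standby while no one is nearby
        if not self.cameras.is_on():
            self.cameras.power_on()              # person detected: start capturing
        frame = self.cameras.capture()
        if self.user_near_screen(frame) or self.predetermined_action(frame):
            self.display.power_on()              # only now turn the display function on

    def user_near_screen(self, frame) -> bool:
        return False  # placeholder: detect the user within the predetermined range

    def predetermined_action(self, frame) -> bool:
        return False  # placeholder: e.g. finger kept in the virtual surface space
```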
 [Display Device]
 FIG. 22 shows a functional block configuration, including hardware such as internal electronic circuits, of the display device 1 of the third embodiment. The display device 1 includes a first antenna 500, a second antenna 501, a tuner circuit 502, a demodulation circuit 503, a video/audio/data signal separation circuit 504, a data decompression circuit 505, a camera signal input circuit 510, an image memory 511, an MPU (microprocessor unit) 520, a nonvolatile data memory 521, a video input circuit 530, a graphics circuit 540, a liquid crystal driving circuit 550, a switch 560, the screen 10 of the display panel, the cameras 21 and 22, the human sensor 601, the main power supply unit 602, a power plug 603, and so on. An external PC 700 or the like can be connected to the display device 1.
 The MPU 520 performs the overall control processing of the display device 1. The MPU 520 handles the various kinds of processing described above with reference to FIG. 2, such as personal recognition, distance measurement of the face and fingers, setting of the virtual surfaces, determination of operations on the virtual surfaces, and GUI display control.
 The nonvolatile data memory 521 stores data and information for control, including registered face image data for user personal recognition and virtual surface information for each individual user.
 The display device 1 also has signal lines and power supply lines connecting the units. The power supply line 651 supplies power from the main power supply unit 602 to the electronic circuits such as the MPU 520 on the control board 600 and to the human sensor 601. The power supply line 652 supplies power from the main power supply unit 602 to the cameras 21 and 22. Control signals and the like are exchanged over the signal lines 610 to 613 and so on.
 As a basic function, the display device 1 has a display function similar to that of a general television receiver. The first antenna 500 is a television antenna for terrestrial digital broadcasting. The second antenna 501 is a television antenna for satellite broadcasting. The display device 1 detects the television signals received by the first antenna 500 and the second antenna 501 with the tuner circuit 502 and demodulates them with the demodulation circuit 503. The demodulated signal is separated into video, audio, and data signals by the video/audio/data signal separation circuit 504. Since the signal is transmitted in a format in which an enormous amount of data is compressed, the signal must be decompressed and aligned with the time scale of a fixed synchronization signal; the data decompression circuit 505 performs this decompression processing. A video signal sent from the external PC 700 or the like is converted into an appropriate format by the video input circuit 530 and transferred to the graphics circuit 540.
 The camera signal input circuit 510 receives the image signals of the captured video obtained from the cameras 21 and 22. The camera signal input circuit 510 converts the image signals of the captured video data into image signals of a predetermined format suitable for image recognition and the like, and stores them side by side in the image memory 511. The image signals of the captured video data are supplied from the image memory 511 to the MPU 520 through the signal line 610.
 The MPU 520 performs the various kinds of processing described above using the image signals in the image memory 511. For example, the MPU 520 extracts feature data of a human face, such as patterns of the outline, eyes, nose, and mouth, from the image signal. The MPU 520 also extracts feature data of the arms and fingers from the image signal. Using the extracted feature data, the MPU 520 performs personal recognition processing, distance measurement processing based on binocular parallax, and so on.
 In the personal recognition processing, the MPU 520 compares the extracted facial feature data with the registered facial feature data stored in the nonvolatile data memory 521 to identify the individual user. When registered facial feature data whose similarity to the facial feature data of the subject is at or above a certain level exists, the MPU 520 determines that the subject corresponds to the individual user of that registered facial feature data.
 The MPU 520 reads the virtual surface information and the like corresponding to the identified individual from the nonvolatile data memory 521. This virtual surface information is information for setting the virtual surface space for each individual user in the space 100 described above, and includes setting values related to the lengths L1 and L2 and so on. When the individual user cannot be identified by the personal recognition processing, the MPU 520 reads standard virtual surface information from the nonvolatile data memory 521. This information includes default values for the lengths L1 and L2 and so on.
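 A minimal sketch of this selection logic follows; the similarity threshold, the example lengths, and the data layout are assumptions made only to illustrate the fallback from per-user settings to the standard values.

```python
# Illustrative sketch: choosing virtual surface setting values (L1, L2) for the
# recognized individual, with the standard values as a fallback.
DEFAULT_SURFACE = {"L1": 0.2, "L2": 0.3}   # standard virtual surface information (example values)

registered_users = {
    # user_id: (registered facial feature data, per-user virtual surface information)
}

def similarity(a, b) -> float:
    return 0.0  # placeholder for the comparison of facial feature data

def virtual_surface_for(face_features, threshold: float = 0.8) -> dict:
    best_id, best_score = None, 0.0
    for user_id, (registered_features, surface_info) in registered_users.items():
        score = similarity(face_features, registered_features)
        if score > best_score:
            best_id, best_score = user_id, score
    if best_id is not None and best_score >= threshold:
        return registered_users[best_id][1]    # per-user L1, L2, ...
    return DEFAULT_SURFACE                     # user not identified: use the default values
```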
 The human sensor 601 detects the presence or absence of a person within a predetermined range around the display device 1. When a person enters the predetermined range, the human sensor 601 provides a detection signal indicating person detection to the MPU 520 through the signal line 612. When the MPU 520 receives the detection signal from the human sensor 601, it gives a control signal to the switch 560 through the signal line 613 and switches the switch 560 from the off state to the on state. As a result, power is supplied from the main power supply unit 602 to the two cameras through the power supply lines 651 and 652, and the two cameras are turned on.
 When, based on the image signal from the image memory 511, the brightness of the room is below the brightness required for personal recognition, the MPU 520 selects and reads a high-luminance image signal from the nonvolatile data memory 521 and sends it to the graphics circuit 540 through the signal line 611. The graphics circuit 540 controls the liquid crystal driving circuit 550 based on the high-luminance image signal, and a high-luminance image is displayed on the screen 10 by the driving from the liquid crystal driving circuit 550. The high-luminance image signal may be data for a menu screen with a high-luminance background. The high-luminance image signal may also include a message such as "Please brighten the room lighting", which is superimposed and displayed on the screen 10.
 When the human sensor 601 detects that the user has left the vicinity of the display device 1 and moved out of the predetermined range, the MPU 520 starts measuring time with an internal timer (not shown) or the like. If no person enters the predetermined range again within a predetermined time, the MPU 520 automatically turns off the power of the display device 1; that is, the display device 1 has a so-called auto shut-off function.
 [First Processing Flow]
 FIG. 23 shows the first processing flow of the display device 1 of the third embodiment. The processing in FIG. 23 is mainly performed by the MPU 520. FIG. 23 includes steps S1 to S6, which are described below in order.
 (S1) The processing in FIG. 23 is started based on a detection signal from the human sensor 601. Based on the detection signal, the cameras 21 and 22 are turned on and start capturing images.
 (S2) From the camera images, the display device 1 determines whether the lighting condition of the room is bright enough for the cameras to capture the user's face and head. If the brightness is sufficient (Y), the processing proceeds to S4.
 (S3) If the brightness is insufficient (N), in S3 the display device 1 displays a high-luminance image on the screen 10, or displays a message or the like asking the user to brighten the room.
 (S4) The display device 1 performs the remote operation control processing using the camera images. This processing is shown as the second processing flow in FIG. 24.
 (S5) The display device 1 determines whether to end the remote operation control of the display system. For example, when the user inputs an end operation, when the absence of the user is detected from the camera images, or when the human sensor 601 detects that no one is in the surrounding area, the display device 1 determines that the display system is to be terminated and proceeds to S6. Otherwise (N), the processing of S4 is repeated.
 (S6) The display device 1 turns off the display function and the power supply from the main power supply unit 602 to the two cameras, and puts the display device 1 into the standby state. After termination, the same processing is repeated from S1.
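 Expressed as code, the first processing flow could be sketched as follows; this is a paraphrase of steps S1 to S6 under the assumption of a device object whose helper method names are placeholders, not an API defined by the specification.

```python
# Illustrative sketch of the first processing flow (S1 to S6).
def first_processing_flow(device):
    device.cameras_on()                          # S1: started by the human sensor signal
    while True:
        if not device.room_bright_enough():      # S2: brightness check from camera images
            device.show_high_luminance_image()   # S3: or ask the user to brighten the room
        device.remote_operation_control()        # S4: second processing flow (FIG. 24)
        if device.should_terminate():            # S5: end operation, user absent, etc.
            break
    device.display_off()                         # S6: display function off
    device.cameras_off()                         #     cameras off; device returns to standby
```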
 [Second Processing Flow]
 FIG. 24 shows the second processing flow of the display device 1 of the third embodiment, which is mainly processing by the MPU 520. FIG. 24 includes steps S11 to S22, which are described below in order.
 (S11) The display device 1 detects the user's face and fingers from the camera images, and detects the distances and positions to the point P0 and the point F0 by distance measurement processing based on binocular parallax.
 (S12) The display device 1 extracts facial features from the face image in the camera images, compares them with the registered facial feature data, and performs personal recognition processing to identify the individual user.
 (S13) If the individual user can be identified (Y), the processing proceeds to S14; if not, it proceeds to S15.
 (S14) The display device 1 refers to the virtual surface information set for each individual user in the nonvolatile data memory 521.
 (S15) The display device 1 refers to the standard virtual surface information, which holds the default values, in the nonvolatile data memory 521. The standard virtual surface information is also referred to when virtual surface information for the individual user has not been registered. The virtual surface information includes the lengths L1 and L2.
 (S16) Using the virtual surface information, the display device 1 sets the first virtual surface 201 and the second virtual surface 202 at the positions of the predetermined lengths L1 and L2 forward on the reference axis J0, which runs from the point P0, the user-side reference point, toward the point Q0.
 (S17) The display device 1 calculates the distance DST between the position of the finger point F0 and the second virtual surface 202, and so on. Using the distance DST, the display device 1 determines the degree and depth of the finger's entry into the virtual surface space.
 (S18) The display device 1 determines whether the finger has entered the second space 102 beyond the first virtual surface 201. If the finger has entered the second space 102 (Y), the processing proceeds to S19; otherwise (N), it proceeds to S20.
 (S19) The display device 1 displays the cursor in the menu screen on the screen 10, at a position corresponding to the finger position and in a state corresponding to the distance DST. At that time, the display device 1 gives the GUI display unit 14 operation input information 301 including cursor display control information.
 (S20) The display device 1 determines whether the finger has entered the third space 103 beyond the second virtual surface 202. If the finger has entered the third space 103 (Y), the processing proceeds to S21; otherwise (N), the processing ends.
 (S21) The display device 1 determines a predetermined operation such as a touch operation on the second virtual surface 202, as shown in FIG. 14 described above. When the display device 1 determines that the predetermined operation has occurred, it gives the GUI display unit 14 operation input information 301 including operation information representing the predetermined operation.
 (S22) The display device 1 causes the GUI display unit 14 to perform the processing corresponding to the predetermined operation on the GUI object. A prescribed operation of the display device 1 is also executed in accordance with that processing. In the third embodiment, the GUI display unit 14 is realized by the MPU 520, and the MPU 520 generates the operation input information 301 and performs the GUI display control processing itself.
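 For reference, the second processing flow can be sketched in the same style as the first; the helper method names stand in for the processing described in steps S11 to S22 and are placeholders, not a defined API.

```python
# Illustrative sketch of the second processing flow (S11 to S22).
def second_processing_flow(device):
    p0, f0 = device.measure_points()                         # S11: binocular parallax measurement
    user = device.recognize_user()                           # S12, S13: personal recognition
    info = (device.user_surface_info(user) if user
            else device.default_surface_info())              # S14 / S15: per-user or default L1, L2
    surf1, surf2 = device.set_virtual_surfaces(p0, info)     # S16: surfaces on the reference axis J0
    dst = device.distance_to_surface(f0, surf2)              # S17: depth of entry (distance DST)
    if device.in_second_space(f0, surf1):                    # S18: beyond the first virtual surface
        device.show_cursor(f0, dst)                          # S19: cursor state depends on DST
    if device.in_third_space(f0, surf2):                     # S20: beyond the second virtual surface
        operation = device.judge_operation(f0)               # S21: e.g. touch on the surface 202
        if operation:
            device.apply_to_gui(operation, f0)               # S22: GUI handling and device action
```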
 Examples of operations of the display device 1 in response to predetermined operations include general switching operations such as switching between video, audio, and digital inputs, as well as the various operations that can be performed from the operation menu of an existing remote control device. By using the remote operation of this display system, operation instructions can be given without using an existing remote control device.
 As described above, according to the third embodiment, remote operation can be realized without using a remote control device, and power saving can also be achieved.
 (Embodiment 4)
 A display device and the like according to the fourth embodiment of the present invention will be described with reference to FIGS. 25 and 26. In the fourth embodiment, a projector (projection display device) is used as the display device 1. The projector has a function of projecting and displaying images on the screen 10 of a projection screen 250 based on digital input data or the like.
 In the display system of FIG. 25, the display device 1 is installed on the ceiling by a mounting fixture 251 and performs projection display on the projection screen 250 in front of it. Two cameras are provided on the housing of the display device 1; the cameras 21 and 22 are arranged at the left and right points Qa and Qb of the housing. The orientations of the two cameras are set so that the shooting range covers the area downward from the ceiling and forward of the projection screen 250, and the orientations are adjustable. The images captured by the cameras 21 and 22 view the user from slightly above.
 The scene of FIG. 25 shows a case where there are users A, B, and C as a plurality of users viewing the projected image on the screen 10 of the projection screen 250. User A is performing remote operations on the virtual surface space of the space 100, as in the first embodiment. Users B and C, on both sides of user A, are viewing the projected image on the same screen 10. User B has long bangs, and user C is wearing a hat with a brim.
 FIG. 26 shows the XZ plane corresponding to images of the space 100 and its surroundings captured looking downward from the two cameras of the ceiling-mounted display device 1, corresponding to the scene of FIG. 25. As in the first embodiment, the user-side reference point and the finger position of each user can be detected by distance measurement based on binocular parallax in the analysis of the camera images. In FIG. 26, the virtual surface space 102a is set in the space 100a of user A. Similarly, the virtual surface space 102b is set in the space 100b of user B, and the virtual surface space 102c is set in the space 100c of user C.
 In the fourth embodiment, the cameras near the ceiling are positioned above the users' heads, but the principle of distance measurement based on binocular parallax is applicable in the same way. However, in the fourth embodiment, the user's face, eyes, and so on are less likely to appear in the camera images. In a captured image such as that of FIG. 26, both eyes of user B's face are hidden by the bangs, and user C's face is hidden by the hat. When both eyes are visible, the user-side reference point can be set as in the first embodiment. Even when both eyes are not visible, the user-side reference point can be set by a predetermined method. For example, when a skin-colored part of the face can be detected, the user-side reference point may be set on that part. Even when the face is difficult to detect, a part such as the head, hair, or hat may be detected, and the user-side reference point may be set on that part, for example at the foremost position in the Z direction. In the fourth embodiment, when analyzing the camera images, features of the head and body viewed from above are extracted rather than the face outline and eyes, and, for example, one point at the tip of the head is set as the point P0.
 The display device 1 also knows in advance the positional relationship between the camera-side reference point near the ceiling and the center point Q0 of the screen 10. The reference axis J0 can be set between the point Q0 and the point P0, the user-side reference point of each user, and a virtual surface can be set on the reference axis J0. In the fourth embodiment as well, a virtual surface suited to the length of the individual's arm and the like can be set based on the registered data for each individual, in accordance with the personal recognition of each user.
 In the fourth embodiment, it is easy to determine from the camera images the state in which the user extends or retracts the arm back and forth toward the screen 10 in the Z direction. The display device 1 may perform remote operation control processing in accordance with the determination of the extension or retraction of the user's arm.
 When this display system is used in a meeting or the like, a cursor corresponding to the position of the finger point F0 can be projected and displayed at the point Kx on the screen 10 as a substitute for a conventional laser pointer or the like. Through remote operation in the virtual surface space, the user can point with the cursor at the part of the screen 10 they want to indicate. Designating the currently projected slide material and the like can also be realized as remote operations on a predetermined GUI.
 In the conventional technology, when a plurality of people, including the viewer and people other than the viewer, appear in the camera images, the movements of the people other than the viewer act as disturbances, making it difficult to detect the viewer's gestures, and the movements of people other than the viewer can lead to erroneous operations. People other than the viewer include, for example, children playing in the same room, people passing by, people who are not watching the screen, and people who are not conscious of the operation. According to the fourth embodiment, suitable remote operation with few erroneous operations is possible even in an environment where a plurality of people are present.
 (Embodiment 5)
 A display device and the like according to the fifth embodiment of the present invention will be described with reference to FIG. 27. In the fifth embodiment, the projector serving as the display device 1 is installed on a table in front of the projection screen 250. The two cameras 21 and 22 are arranged at the points Qa and Qb, the left and right positions on the housing of the display device 1. The display device 1 projects and displays images on the screen 10 of the projection screen 250 in front of it. The cameras 21 and 22 capture a predetermined range in front of the screen 10 including the user. In this way, remote operation can also be realized in the fifth embodiment in the same manner as in the first embodiment and the others.
 (Embodiment 6)
 A display device and the like according to the sixth embodiment of the present invention will be described with reference to FIG. 28. In the sixth embodiment, as in the fifth embodiment, the projector serving as the display device 1 is arranged on a table in front of the projection screen 250. The two cameras are arranged, for example, at the upper right and upper left positions of the projection screen 250; the camera 21 is attached at the upper right point Qa and the camera 22 at the upper left point Qb. The housing of the display device 1 and the two cameras are connected by wiring 281 and 282, which are wired cables including signal lines and power supply lines. The signals of the cameras' captured video are transmitted to the display device 1 through the wiring 281 and 282.
 In this way, remote operation can also be realized in the sixth embodiment in the same manner as in the first embodiment and the others. The arrangement positions of the two cameras are adjustable. In particular, the distance between the two cameras can be made larger, and the shooting range can be widened. When cameras 21 and 22 with wireless communication functions are used, the wiring 281 and 282 can be omitted.
 (Embodiment 7)
 A display device and the like according to the seventh embodiment of the present invention will be described with reference to FIG. 29. The display device 1 of the seventh embodiment has the same basic configuration as that of the first embodiment and, in addition, has a remote control function for the case where a plurality of users use remote operation simultaneously.
 In the example of FIG. 29, a plurality of users (for example, users A, B, C, D, and E) are viewing the screen 10 of the display device 1 in a meeting or the like. Content such as slide material is displayed on the screen 10. This display system can also be used as a conference system and the like.
 As in the first embodiment and the others, virtual surfaces in the space 100 are set for each of the plurality of users. For simplicity, FIG. 29 shows only some of the virtual surfaces, for example the second virtual surface 202 of user B and the second virtual surface 202 of user C.
 In the case of simultaneous use of remote operation by a plurality of users, the following control is possible. The display device 1 gives the operation authority as the representative operator to only one user. The figure shows the case where, among the plurality of users, user B, for example, first performs an operation on the virtual surface space at a first point in time, while the other users are not touching their virtual surface spaces. User B's finger enters through the first virtual surface 201 and performs a touch operation or the like on the second virtual surface 202. The display device 1 gives the operation authority as the current representative operator to user B, who entered the virtual surface space first. In response to user B's operation, the representative operator's cursor 291 and the like are displayed on the screen 10.
 In addition, an image 292 representing user B, the representative operator who currently holds the operation authority and is performing remote operation, is displayed at a part of the screen 10. A camera-captured image is used for this image 292, for example a trimmed portion of the captured image. The image 292 may instead be the registered face image used for personal recognition, or other information such as an icon or a mark for each user. The image 292 allows all users to share a common understanding of who the current remote operator is.
 For example, if users B and C both try to operate their virtual surfaces at the same time, reflecting the two users' remote operations simultaneously could cause confusion. Therefore, when it is not desired to reflect the remote operations of a plurality of users at the same time, the operation authority is given only to a single user as described above; that is, the operation authority is given on a first-come, first-served basis. Furthermore, when a plurality of users attempt remote operations almost simultaneously, a user who was not given the operation authority might mistake the situation for a malfunction of the system, since their remote operations are not reflected. For this reason, the image 292 representing the user who currently holds the operation authority and is operating is displayed on a part of the screen 10.
 Next, at a second point in time, suppose that user C, for example, performs an operation on user C's virtual surface space, and user C's finger enters the virtual surface space through the first virtual surface 201. At this time, since user B is operating as the representative operator, the display device 1 does not give the operation authority to user C and treats user C's remote operation as invalid. No cursor corresponding to the position of user C's finger is displayed on the screen 10.
 Next, at a third point in time, when user B's finger leaves the virtual surface space and the space 100 and user B stops operating, user B's operation authority as the representative operator is released. Thereafter, in the same way, the operation authority as the representative operator is given to the one user who enters their virtual surface space first.
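 A minimal sketch of this first-come, first-served operation authority follows; the class and method names are assumptions used only to show the locking behavior described above.

```python
# Illustrative sketch: only the first user whose finger enters a virtual surface
# space becomes the representative operator; the authority is released when that
# user's finger leaves.
class OperationAuthority:
    def __init__(self):
        self.representative = None   # user currently holding the operation authority

    def update(self, user_id: str, finger_in_surface: bool) -> bool:
        """Return True if this user's remote operation should be accepted."""
        if self.representative is None and finger_in_surface:
            self.representative = user_id       # first entrant takes the authority
        elif self.representative == user_id and not finger_in_surface:
            self.representative = None          # authority released when the finger leaves
        return self.representative == user_id
```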
 As a modification, the following control may be performed. Simultaneous remote operation by a plurality of users, up to a predetermined number, is allowed, and a plurality of cursors, one for each user, are displayed on the screen 10. Each of the plurality of users is informed in advance that simultaneous remote operation is allowed, and each user performs remote operations in turn as necessary. In this case, when the fingers of a plurality of users enter their virtual surface spaces, a plurality of cursors are displayed on the screen 10 and the display may become crowded, but this is useful depending on how the system is used. There is no particular problem if the plurality of users cooperate based on an agreed way of using the system. For example, it may be used in an application that enables collaborative work.
 In a situation where a plurality of users view the screen at the same time and remote operation by each user is to be enabled, issues also arise such as which user's remote operation to accept and how to control simultaneous remote operations by a plurality of users. According to the seventh embodiment, suitable remote operation is possible even when a plurality of users perform remote operations.
 As another modification, the following control may be performed. The display device 1 may set priorities or the like among the plurality of users. When a plurality of users put their fingers into their respective virtual surfaces at the same time, the operation authority is given, for example, only to the user with the highest priority, in accordance with the priority settings. Also, while a first user with a lower priority is performing a remote operation, if a second user with a higher priority puts a finger into their virtual surface, the operation authority may be transferred from the first user to the second user.
 As described above, according to the seventh embodiment, simultaneous use of remote operation by a plurality of users can be realized smoothly.
 [Determination of Body Continuity]
 Based on the analysis of the camera images, the display device 1 detects the user's face and head and the user's fingers and arms. At that time, the display device 1 determines the continuity and identity of the individual user's body; that is, for example, it determines whether the arm and finger regions are continuously connected to the face region in the captured image.
 For example, in the case of simultaneous use by a plurality of people as in FIG. 29, during remote operation the face in the camera image and the finger in the virtual surface space may not belong to the same user. For example, user A might reach an arm from beside user B and touch user B's virtual surface with a finger. Taking such possibilities into account, the display device 1 determines the continuity and identity of the individual user's body when analyzing the camera images. When the display device 1 determines that the face and the finger in the camera image belong to different users, it treats the remote operation as invalid.
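 As one possible sketch of such a continuity check, the following assumes that a person detector or pose estimator groups face and hand keypoints per detected person, and accepts a finger only when it belongs to the same person as the recognized face; the data format and tolerance are assumptions, not part of the specification.

```python
# Illustrative sketch: body-continuity check between a face point and a finger point.
def finger_belongs_to_face(persons, face_point, finger_point, tol=0.05) -> bool:
    """persons: per-person keypoint groups, e.g. [{'face': (x, y), 'hand': (x, y)}, ...]."""
    def close(a, b):
        return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol
    for person in persons:
        if close(person["face"], face_point) and close(person["hand"], finger_point):
            return True    # face and finger are connected through the same body
    return False           # different users: the remote operation is treated as invalid
```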
 [When Holding an Object in the Hand]
 During remote operation in the display system of each embodiment, the user can basically perform remote operations easily with nothing in the hand. The user is not limited to this, however, and can also perform remote operations while holding an object such as a stick in the hand. In this case, when analyzing the camera images, the display system detects not only the finger but also the object such as the stick, determines the continuity between the finger and the object, and detects, for example, the point representing the position of the tip of the object as the point F0. The display system may also detect the color, shape, and other properties of the object and use them for control.
 While the present invention has been specifically described above based on the embodiments, the present invention is not limited to the embodiments described above and can be modified in various ways without departing from the gist thereof. Addition, deletion, separation, merging, replacement, and combination of the components of the embodiments are possible. The numerical values and the like in the specific examples of the embodiments are merely examples. In the drawings, only some of the lines connecting the components are shown. Some or all of the functions and the like of the embodiments may be realized by hardware such as integrated circuits, or may be realized by software program processing. The software constituting the functions and the like of each device may be stored in the device in advance at the time of product shipment, or may be acquired from an external device via communication after product shipment. The display system and display device of the embodiments are not limited to television receivers and projectors, and can be applied as operation input means for various electronic devices.
 DESCRIPTION OF SYMBOLS: 1 ... display device, 10 ... screen, 21, 22 ... cameras, 100 ... space, 101 ... first space, 102 ... second space, 103 ... third space, 201 ... first virtual surface, 202 ... second virtual surface, J0 ... reference axis, P0, Q0 to Q8, F0 ... points.

Claims (17)

  1.  A display device having a function of controlling a user's remote operation on a screen of the display device, the display device comprising:
     at least two cameras that capture a range including the user viewing the screen,
     wherein the display device, by analysis of images captured by the two cameras, detects a position of a first point, which is a user-side reference point associated with the user's head, face, or eyes, and a position of a second point of the user's finger, relative to positions of camera-side reference points of the two cameras,
     sets, in a space connecting the screen and the first point, a virtual surface space at a position a predetermined length from the first point in the viewing direction toward the screen so as to overlap the screen as viewed from the user,
     calculates a degree of entry of the finger into the virtual surface space, including a distance between the position of the virtual surface space and the position of the second point,
     determines, based on the degree of entry, a predetermined remote operation including a touch operation of the finger on the virtual surface space,
     generates operation input information including the position of the second point or position coordinates in the screen associated with the position of the second point, and operation information representing the predetermined remote operation, and
     controls operation of the display device by the operation input information.
  2.  The display device according to claim 1,
     wherein the display device performs control so as to display, on the screen, a pointer image representing the presence of the finger in accordance with the position of the second point and the degree of entry, and
     wherein the operation input information includes display control information of the pointer image.
  3.  The display device according to claim 1,
     wherein, as the degree of entry, when the distance is relatively long, the image is displayed in a relatively large size, a first color, or a first shape type, and when the distance is relatively short, the image is displayed in a relatively small size, a second color, or a second shape type.
  4.  The display device according to claim 1,
     wherein, in the analysis of the images captured by the two cameras, the position of the first point and the position of the second point are detected using the principle of distance measurement based on binocular parallax.
  5.  The display device according to claim 2,
     wherein the virtual surface space has a single virtual surface set at a position a predetermined length from the first point,
     wherein, when the finger is in front of the single virtual surface, the pointer image is displayed on the screen, and
     wherein, when the finger has entered beyond the single virtual surface, a touch operation is determined.
  6.  The display device according to claim 2,
     wherein the virtual surface space has a first virtual surface set at a position a first length from the first point and a second virtual surface set at a position a second length from the first point,
     wherein, when the finger is in front of the first virtual surface, the pointer image is not displayed on the screen,
     wherein, when the finger has entered beyond the first virtual surface, the pointer image is displayed on the screen, and
     wherein, when the finger has entered beyond the second virtual surface, a touch operation is determined.
  7.  The display device according to claim 2,
     wherein the virtual surface space has a first virtual surface set at a position a first length from the first point and a second virtual surface set at a position a second length from the first point,
     wherein, when the finger is in front of the first virtual surface, the pointer image is not displayed on the screen,
     wherein, when the finger has entered beyond the first virtual surface, the pointer image is displayed on the screen, and
     wherein, when the finger has entered beyond the second virtual surface, the pointer image is not displayed on the screen.
  8.  The display device according to claim 1,
     wherein the two cameras are arranged at left and right positions in the horizontal direction in the vicinity of the screen.
  9.  The display device according to claim 1,
     wherein the two cameras are arranged at left and right positions in the horizontal direction, located forward of and away from the screen.
  10.  The display device according to claim 1, wherein
    personal recognition of the user is performed using the cameras, and
    the virtual plane space is set using a standard value as the predetermined length when the user cannot be recognized as an individual, and is set at a position adjusted using a user setting value of the predetermined length corresponding to the individual when the user can be recognized as an individual,
    Display device.
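A minimal sketch of the per-user adjustment in claim 10, assuming face recognition yields either a user identifier or None and that per-user offsets are kept in a simple mapping; the default value is hypothetical:

```python
DEFAULT_PLANE_OFFSET_M = 0.25   # hypothetical standard value of the predetermined length

def plane_offset_for(user_id, user_settings):
    """Fall back to the standard offset when the user is not recognized as an individual,
    otherwise use that individual's stored setting."""
    if user_id is None:
        return DEFAULT_PLANE_OFFSET_M
    return user_settings.get(user_id, DEFAULT_PLANE_OFFSET_M)
```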
  11.  The display device according to claim 1, wherein
    the virtual plane space is set so as to overlap only a partial area of the entire area of the screen as viewed from the user, and the remote operation is enabled in the partial area and disabled in the other areas,
    Display device.
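For claim 11, a minimal sketch of restricting remote operation to the partial screen area that the virtual plane overlaps, assuming the finger position has already been mapped to screen pixel coordinates; the region format is an assumption:

```python
def remote_operation_enabled(x, y, active_region):
    """Return True only when the mapped coordinate lies inside the enabled region.

    active_region is assumed to be (x0, y0, x1, y1) in screen pixels.
    """
    x0, y0, x1, y1 = active_region
    return x0 <= x <= x1 and y0 <= y <= y1
```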
  12.  The display device according to claim 1, wherein
    when, within the virtual plane space, the position of the finger overlaps a GUI object on the screen, a touch operation is determined if the distance becomes equal to or less than a predetermined value, a tap operation is further determined if the distance then returns from the touch-operation state to a value greater than the predetermined value, and corresponding processing according to the touch operation or the tap operation is executed,
    Display device.
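A minimal sketch of the touch/tap determination in claim 12, assuming the distance to the virtual plane is updated every frame and that the fingertip already overlaps a GUI object; the threshold value and state names are illustrative:

```python
def update_touch_tap(state, distance_m, threshold_m=0.0):
    """Touch when the distance drops to the threshold or below, tap when it comes back out."""
    event = None
    if state == "idle" and distance_m <= threshold_m:
        state, event = "touch", "touch"
    elif state == "touch" and distance_m > threshold_m:
        state, event = "idle", "tap"
    return state, event
```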
  13.  The display device according to claim 1, wherein
    regarding the position of the finger within the virtual plane space, a touch operation is determined when the distance becomes equal to or less than a predetermined value, a swipe operation is determined when the position of the finger moves within the virtual plane space in a direction parallel to the screen from the touch-operation state, and corresponding processing according to the swipe operation is executed,
    Display device.
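A minimal sketch of the swipe determination in claim 13, assuming the fingertip position is tracked in screen-parallel pixel coordinates while the touch state is active; the travel threshold is a hypothetical debouncing value:

```python
def detect_swipe(touch_active, start_xy, current_xy, min_travel_px=80):
    """Report a swipe once the finger, still in the touch state, has moved far enough
    parallel to the screen; otherwise report nothing."""
    if not touch_active:
        return None
    dx = current_xy[0] - start_xy[0]
    dy = current_xy[1] - start_xy[1]
    if max(abs(dx), abs(dy)) < min_travel_px:
        return None
    return "swipe_horizontal" if abs(dx) >= abs(dy) else "swipe_vertical"
```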
  14.  The display device according to claim 1, wherein
    regarding the position of the finger within the virtual plane space, a touch operation is determined when the distance becomes equal to or less than a predetermined value, a plurality of fingers located in the virtual plane space in the touch-operation state are detected, a pinch operation is determined when the position of each of the plurality of fingers moves in a direction parallel to the screen, and corresponding processing according to the pinch operation is executed,
    Display device.
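A minimal sketch of the pinch determination in claim 14, assuming two fingertip positions per frame in screen-parallel pixel coordinates while the touch state is active; the change threshold is hypothetical:

```python
import math

def detect_pinch(touch_active, prev_points, curr_points, min_change_px=30):
    """Report pinch-in or pinch-out from the change in the distance between two fingers."""
    if not touch_active or len(prev_points) < 2 or len(curr_points) < 2:
        return None
    d_prev = math.dist(prev_points[0], prev_points[1])
    d_curr = math.dist(curr_points[0], curr_points[1])
    if abs(d_curr - d_prev) < min_change_px:
        return None
    return "pinch_out" if d_curr > d_prev else "pinch_in"
```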
  15.  The display device according to claim 1, further comprising
    a human presence sensor that detects the presence or absence of a person around the display device, wherein
    when the presence of a person around the display device is detected using the human presence sensor, the cameras are turned on and image capturing is started, and
    when the presence of the user in the vicinity of the screen or a predetermined action by the user is detected based on the captured images of the cameras, the display function of the display device is turned on,
    Display device.
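A minimal sketch of the staged wake-up in claim 15, assuming the presence-sensor reading and the image-based user detection arrive as booleans each cycle; the state names are illustrative:

```python
def update_power_state(state, person_nearby, user_detected_in_image):
    """standby -> camera_on when a person is sensed nearby; camera_on -> display_on once
    image analysis confirms the user (or a predetermined action) near the screen."""
    if not person_nearby:
        return "standby"
    if state == "standby":
        return "camera_on"            # start capturing, display still off
    if state == "camera_on" and user_detected_in_image:
        return "display_on"           # turn the display function on
    return state
```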
  16.  The display device according to claim 1, wherein
    when a plurality of users use the display device simultaneously as the user, the virtual plane space is set for each of the users, and the authority for the remote operation is given only to the one user whose finger first enters one of the plural virtual plane spaces of the plurality of users, and
    information representing the user who currently holds the authority is displayed on a part of the screen,
    Display device.
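A minimal sketch of the first-come authority rule in claim 16, assuming each user's plane-entry events are timestamped and collected in a mapping; names are illustrative:

```python
def grant_authority(current_holder, entry_times):
    """Keep the current holder if there is one; otherwise grant the remote-operation
    authority to the user whose finger entered its virtual plane space first."""
    if current_holder is not None:
        return current_holder
    if not entry_times:
        return None
    return min(entry_times, key=entry_times.get)   # earliest entry timestamp wins
```

The identifier returned here would also be what drives the on-screen indication of who currently holds the authority.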
  17.  A remote operation control device connected to a display device and having a function of controlling a user's remote operation on a screen of the display device, the remote operation control device comprising
    at least two cameras that capture a range including a user viewing the screen, wherein
    by analysis of the images captured by the two cameras, a position of a first point, which is a user-side reference point associated with the head, face, or eye of the user, and a position of a second point of a finger of the user are detected with respect to a position of a camera-side reference point of the two cameras,
    a virtual plane space is set, within the space connecting the screen and the first point, at a position a predetermined length from the first point in the viewing direction toward the screen so as to overlap the screen as viewed from the user,
    a degree of entry of the finger into the virtual plane space, including a distance between the position of the virtual plane space and the position of the second point, is calculated,
    a predetermined remote operation, including a touch operation of the finger on the virtual plane space, is determined based on the degree of entry, and
    operation input information including the position of the second point or position coordinates in the screen associated with the position of the second point and operation information representing the predetermined remote operation is generated, and operation of the display device is controlled by the operation input information,
    Remote operation control device.
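An end-to-end sketch of the claim 17 pipeline, assuming the stereo analysis has already produced the first point (head/face/eye) and the second point (fingertip) as (x, y, depth) triples measured from the camera-side reference point, with depth increasing toward the user, and assuming a caller-supplied function that projects the fingertip onto screen coordinates; all names and values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class OperationInput:
    """Operation input information: a screen coordinate plus the detected operation."""
    screen_xy: tuple
    operation: str

def process_frame(first_point, second_point, plane_offset_m, map_to_screen):
    """Set the virtual plane a predetermined length in front of the first point,
    measure the fingertip's degree of entry past it, and emit operation input."""
    plane_depth = first_point[2] - plane_offset_m    # plane sits between the user and the screen
    entry = plane_depth - second_point[2]            # > 0 once the fingertip has crossed the plane
    operation = "touch" if entry >= 0 else "hover"
    return OperationInput(screen_xy=map_to_screen(second_point), operation=operation)

# Hypothetical example: head 2.0 m from the cameras, plane offset 0.25 m, fingertip at
# 1.70 m -> the fingertip is 0.05 m past the plane, so a touch operation is reported.
# process_frame((0.0, 0.0, 2.0), (0.1, -0.2, 1.7), 0.25, lambda p: (640, 360))
```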
PCT/JP2016/082467 2016-11-01 2016-11-01 Display device and remote operation controller WO2018083737A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/082467 WO2018083737A1 (en) 2016-11-01 2016-11-01 Display device and remote operation controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/082467 WO2018083737A1 (en) 2016-11-01 2016-11-01 Display device and remote operation controller

Publications (1)

Publication Number Publication Date
WO2018083737A1 true WO2018083737A1 (en) 2018-05-11

Family

ID=62075661

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/082467 WO2018083737A1 (en) 2016-11-01 2016-11-01 Display device and remote operation controller

Country Status (1)

Country Link
WO (1) WO2018083737A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003316510A (en) * 2002-04-23 2003-11-07 Nippon Hoso Kyokai <Nhk> Display device for displaying point instructed on display screen and display program
JP2010205223A (en) * 2009-03-06 2010-09-16 Seiko Epson Corp System and device for control following gesture for virtual object
JP2011039844A (en) * 2009-08-12 2011-02-24 Shimane Prefecture Image recognition device, operation decision method and program
JP2011164681A (en) * 2010-02-04 2011-08-25 Sharp Corp Device, method and program for inputting character and computer-readable recording medium recording the same
JP2012137989A (en) * 2010-12-27 2012-07-19 Sony Computer Entertainment Inc Gesture operation input processor and gesture operation input processing method
JP2014071672A (en) * 2012-09-28 2014-04-21 Shimane Prefecture Information input device, and information input method
JP5896578B2 (en) * 2012-11-22 2016-03-30 シャープ株式会社 Data input device
JP5784245B2 (en) * 2012-11-30 2015-09-24 日立マクセル株式会社 Video display device, setting change method thereof, and setting change program
JP2016099917A (en) * 2014-11-26 2016-05-30 レノボ・シンガポール・プライベート・リミテッド Method for performing action corresponding to pointing gesture, conference support system, and computer program
JP2016134022A (en) * 2015-01-20 2016-07-25 エヌ・ティ・ティ アイティ株式会社 Virtual touch panel pointing system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020149152A (en) * 2019-03-11 2020-09-17 株式会社デンソーテン Control device and control method
JP2020170311A (en) * 2019-04-02 2020-10-15 船井電機株式会社 Input device
JP7400205B2 (en) 2019-04-02 2023-12-19 船井電機株式会社 input device
JP2021182665A (en) * 2020-05-18 2021-11-25 東日本電信電話株式会社 Information processing device

Similar Documents

Publication Publication Date Title
US11470377B2 (en) Display apparatus and remote operation control apparatus
US10606441B2 (en) Operation control device and operation control method
US8693732B2 (en) Computer vision gesture based control of a device
US8818027B2 (en) Computing device interface
US10338776B2 (en) Optical head mounted display, television portal module and methods for controlling graphical user interface
US20120056989A1 (en) Image recognition apparatus, operation determining method and program
US20110080337A1 (en) Image display device and display control method thereof
JP7369834B2 (en) display device
US20140053115A1 (en) Computer vision gesture based control of a device
RU2598598C2 (en) Information processing device, information processing system and information processing method
US10276133B2 (en) Projector and display control method for displaying split images
JP2012238293A (en) Input device
WO2018083737A1 (en) Display device and remote operation controller
JP2007086995A (en) Pointing device
JP2016126687A (en) Head-mounted display, operation reception method, and operation reception program
KR20120136719A (en) The method of pointing and controlling objects on screen at long range using 3d positions of eyes and hands
JP2021005168A (en) Image processing apparatus, imaging apparatus, control method of image processing apparatus, and program
CN104780298A (en) Camera device with no image display function
US20160320897A1 (en) Interactive display system, image capturing apparatus, interactive display method, and image capturing method
JP6170457B2 (en) Presentation management device and presentation management program
JP2003283865A (en) Apparatus controller

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16920646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16920646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP