WO2022191276A1 - Operation input device, operation input method, and program - Google Patents


Info

Publication number
WO2022191276A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
distance
camera
touch panel
dimensional data
Application number
PCT/JP2022/010548
Other languages
French (fr)
Japanese (ja)
Inventor
堪亮 坂本
Original Assignee
株式会社ネクステッジテクノロジー
Application filed by 株式会社ネクステッジテクノロジー
Priority to JP2023505628A (patent JP7452917B2)
Publication of WO2022191276A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means

Definitions

  • the present invention relates to an operation input device, an operation input method, and a program for inputting information related to operator's operations.
  • A non-contact operation input device is attracting attention as a means that can lighten the operator's operation burden and that allows an information processing device to be operated even in the middle of work at a surgical site, a cooking site, or the like.
  • Such an operation input device includes, for example, one in which a camera photographs an indicator such as the operator's finger, the three-dimensional position of the indicator is measured, and an operation input is given to an information processing apparatus based on that three-dimensional position.
  • The interactive projector described in Patent Document 1 uses two cameras: contact of a self-luminous indicator with the projection screen is detected based on its light-emission pattern, and contact of a non-luminous indicator with the projection screen is detected by a position detection unit. It is explained that this improves the accuracy with which contact of the pointer with the screen surface is detected.
  • the present invention has been made in view of the above circumstances, and aims to provide an operation input device, an operation input method, and a program capable of accurately detecting non-contact operation input with a simple configuration.
  • The operation input device of the present invention includes: a three-dimensional data generation unit that generates three-dimensional data based on output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to the pointing direction of an indicator;
  • a coordinate transformation unit that assumes a virtual camera whose line-of-sight direction is a second straight line obtained by rotating a first straight line, which extends in the line-of-sight direction of the real camera, by a predetermined angle about a specific point on the first straight line located at a predetermined first distance from the real camera, and that transforms the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
  • a contact determination unit that, taking as a virtual touch panel a surface separated from the virtual camera by a predetermined second distance in a direction parallel to the second straight line, compares a third distance, which is the shortest distance from the virtual camera to the indicator calculated based on the virtual three-dimensional data, with the second distance from the virtual camera to the virtual touch panel, and determines that the indicator has come into contact with the virtual touch panel when the third distance is equal to or less than the second distance; and
  • an operation input signal generation unit that generates an operation input signal based on the virtual three-dimensional data when the contact determination unit determines that the indicator has made contact.
  • FIG. 1 is a block diagram showing the hardware configuration of the operation input system according to Embodiment 1.
  • FIG. 2 is a functional block diagram showing the functional configuration of the operation input device according to Embodiment 1.
  • FIG. 3 is a schematic side view of the display means, the real camera, and the virtual touch panel according to Embodiment 1.
  • FIG. 4 is a diagram showing the positional relationship between the real camera and the virtual camera.
  • FIG. 5 is a flowchart of operation input processing according to Embodiment 1.
  • FIG. 6 is a diagram showing installation positions of real cameras.
  • FIG. 7 is a schematic side view of the display means, the real camera, the virtual touch panel, and the hover surface according to Embodiment 2.
  • FIG. 8 is a perspective view showing a schematic configuration of the display means, the real camera, the virtual touch panel, and the instruction effective area according to Embodiment 3.
  • FIG. 9 is a flowchart of operation input processing according to Embodiment 3.
  • FIG. 10 is a diagram showing the arrangement of real cameras according to Embodiment 4.
  • FIG. 11 is a schematic side view of the display means, the transparent plate, and the virtual touch panel according to another embodiment.
  • the operation input system 1 is an information processing system that performs processing based on an operation input signal generated by determining an operation using an operator's finger or other indicator.
  • FIG. 1 is a block diagram showing the hardware configuration of an operation input system 1 according to the first embodiment.
  • the operation input system 1 comprises a real camera 10 and an operation input device 20, as shown in FIG.
  • the real camera 10 is a 3D camera (3 Dimensional Camera) that outputs data representing 3D information. Any conventional method may be used to obtain the three-dimensional information, and for example, a stereo method may be used.
  • the real camera 10 is preferably capable of wide-angle photography, and is equipped with a wide-angle FOV 84° lens, for example.
  • a dedicated camera built into the operation input device 20 may be used.
  • The operation input device 20 is an arbitrary information processing terminal; it may be a general-purpose information processing terminal such as a personal computer, a smartphone, or a tablet terminal in which a program for operation input processing is installed, or a dedicated terminal. As shown in FIG. 1, the operation input device 20 includes a CPU (Central Processing Unit) 21, a RAM (Random Access Memory) 22, a ROM (Read Only Memory) 23, a communication interface 24, a storage unit 25, and display means 26.
  • The operation input system 1 detects the operator's virtual touch operation on a virtual touch panel imagined in front of the display surface of the display means 26 of the operation input device 20, based on the output data of the real camera 10.
  • the display means 26 of the operation input device 20 is an arbitrary display device that displays information such as images and characters.
  • The display means 26 may be a tangible display, such as a liquid crystal display or an organic EL (Electro-Luminescence) display built into or external to a personal computer, smartphone, tablet terminal, or the like, or it may be an intangible or fluid type of display device.
  • the indicator is an object that can be moved by the operator in order to specify the position on the display screen of the display means 26, and has an elongated shape extending in the pointing direction.
  • the indicator is, for example, an operator's finger or a pointing stick.
  • the real camera 10 is fixed with respect to the display surface of the display means 26 of the operation input device 20 .
  • the line-of-sight direction of the real camera 10 (the direction toward the center of the field of view) is inclined with respect to the pointing direction of the indicator.
  • For example, when the display means 26 is the liquid crystal display of a personal computer and the real camera 10 is fixed to an edge of the liquid crystal display, the line-of-sight direction is the direction toward the area in front of the center of the display surface of the display means 26, slanted with respect to the direction perpendicular to the display surface.
  • In the following, the case where the display means 26 is the liquid crystal display of a personal computer and the real camera 10 is fixed to the central upper end portion of the liquid crystal display will be described.
  • the line-of-sight direction of the real camera 10 is obliquely downward with respect to the direction perpendicular to the display surface of the display means 26 .
  • the CPU 21 of the operation input device 20 controls each component of the operation input device 20, and executes each process including the operation input process by executing the programs stored in the ROM 23 and the storage unit 25.
  • the RAM 22 is a memory in which data can be read and written at high speed, and temporarily stores data output from the real camera 10 and data read from the storage unit 25 for data processing executed by the CPU 21 .
  • the ROM 23 is a read-only memory that stores programs for processing executed by the CPU 21, setting values, and the like.
  • the communication interface 24 is an interface through which data is transmitted and received with the real camera 10 .
  • the storage unit 25 is a large-capacity storage device, and is composed of a flash memory or the like.
  • the storage unit 25 stores the output data of the real camera 10 and the data generated by the processing of the CPU 21 .
  • the storage unit 25 further stores programs executed by the CPU 21 .
  • the display means 26 displays information such as images and characters generated by the CPU 21 .
  • By executing the operation input processing program stored in the storage unit 25, the CPU 21 and the RAM 22 function, as shown in FIG. 2, as a data acquisition unit 211, a three-dimensional data generation unit 212, a parameter acquisition unit 213, a coordinate conversion unit 214, a pointer detection unit 215, a contact determination unit 216, and an operation input signal generation unit 217.
  • the data acquisition unit 211 acquires output data of the real camera 10 .
  • the three-dimensional data generation unit 212 develops the output data acquired by the data acquisition unit 211 in a three-dimensional coordinate space to generate three-dimensional data.
  • The output data format of the real camera 10 may be any format, and the three-dimensional data generation unit 212 executes data processing according to the output data format of the real camera 10 in order to generate three-dimensional data in a predetermined format.
  • the output data of the real camera 10 may be directly used as three-dimensional data.
  • the parameter acquisition unit 213 acquires parameters related to the position and orientation of the real camera 10 and the position and size of the virtual touch panel.
  • the parameters are determined in advance by an initial setting input by the administrator of the operation input device 20 .
  • the coordinate conversion unit 214 converts the three-dimensional data generated based on the output data of the real camera 10 into virtual three-dimensional data corresponding to the virtual touch panel.
  • a virtual camera is assumed in which the real camera 10 is virtually moved for conversion into virtual three-dimensional data. That is, the coordinate transformation unit 214 transforms the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera.
  • the pointer detection unit 215 detects the point closest to the virtual camera as the tip of the pointer based on the virtual three-dimensional data.
  • the contact determination unit 216 determines whether or not the tip of the pointer detected by the pointer detection unit 215 has come closer to the display unit 26 than the virtual touch panel.
  • FIG. 3 is a schematic diagram of the real camera 10, the display means 26, and the virtual touch panel 311 viewed from the side.
  • FIG. 4 is a diagram showing the positional relationship between the real camera 10 and the virtual camera 312.
  • the 3D data generation unit 212 generates 3D data based on the output data of the real camera 10 acquired by the data acquisition unit 211 .
  • the three-dimensional data generator 212 can generate three-dimensional data including the pointer 321.
  • the plane on which the virtual touch panel 311 extends is defined as an extension plane P
  • the plane parallel to the extension plane P that passes through a reference point that is the reference of the three-dimensional coordinates of the real camera 10 is defined as a reference plane B.
  • the distance between the extension plane P and the reference plane B is Pz.
  • the distance Pz is a value within a range that is determined according to specifications such as the viewing angle of the real camera 10 or the three-dimensional coordinate measurable range, and is determined by the administrator's input of initial settings.
  • the real camera 10 is fixed to the central upper end portion of the display means 26, and is oriented obliquely downward.
  • the line-of-sight direction of the real camera 10 is E
  • the direction passing through the center of the virtual touch panel 311 and perpendicular to the extension plane P is V
  • the angle formed by the direction E and the direction V is ⁇ .
  • The angle θ is a parameter input by the administrator according to the orientation of the real camera 10. The administrator assumes the position and size of the virtual touch panel 311 and decides the distance Pz from the reference plane B to the virtual touch panel 311 together with the angle θ.
  • Transformation from the three-dimensional data of the real camera 10 to virtual three-dimensional data is performed assuming a virtual camera 312 in which the real camera 10 is rotated and translated, as shown in FIG.
  • the line-of-sight direction of the virtual camera 312 is defined as a direction V that passes through the center of the virtual touch panel 311 and is perpendicular to the virtual touch panel 311 .
  • the reference point for determining the virtual three-dimensional data of the virtual camera 312 is on the reference plane B.
  • Let S be the distance between the reference point of the real camera 10 and the specific point 313, where the specific point 313 is the intersection of the directions E and V.
  • The parameters that the parameter acquisition unit 213 acquires through the administrator's input are the angle θ for defining the line-of-sight direction of the real camera 10, the distance Pz for defining the position of the virtual touch panel 311 shown in FIGS. 3 and 4, and the distance S for defining the specific point 313 about which the rotation from the real camera 10 to the virtual camera 312 is centered.
  • The administrator also considers the performance of the real camera 10 when determining these parameters. By setting them, the conversion method from three-dimensional data to virtual three-dimensional data can be specified, as illustrated below.
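  • As a concrete illustration (not part of the publication), the three administrator-supplied parameters can be grouped as follows; the class and field names are hypothetical and the numeric values are placeholders.

```python
from dataclasses import dataclass
import math

@dataclass
class VirtualTouchPanelParams:
    """Administrator-supplied setup values; names are illustrative only."""
    theta: float  # angle between the real camera's line of sight E and the direction V (radians)
    s: float      # distance S from the real camera's reference point to the specific point 313
    pz: float     # distance Pz from the reference plane B to the virtual touch panel 311

# e.g. a camera tilted 30 degrees with the panel 25 cm in front of the reference plane B
params = VirtualTouchPanelParams(theta=math.radians(30.0), s=0.40, pz=0.25)
```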
  • A virtual camera 312 is assumed whose line-of-sight direction V is that of a second straight line obtained by rotating the first straight line, which extends in the line-of-sight direction E of the real camera 10, by the angle θ about the specific point 313. Then, the three-dimensional data in the coordinate space of the real camera 10 is coordinate-transformed into virtual three-dimensional data in the coordinate space of the virtual camera 312.
  • the virtual touch panel 311 is separated from the reference point of the virtual camera 312 by Pz (second distance) in a direction parallel to the second straight line.
  • the shape of the virtual touch panel 311 may be any shape as long as it can correspond to the display surface of the display means 26 .
  • In this embodiment, the virtual touch panel 311 is rectangular; the second straight line passes through the center point of the rectangle, and the virtual touch panel 311 extends in a direction perpendicular to the second straight line.
  • the second straight line passes through the reference point of virtual camera 312 and the center of virtual touch panel 311 .
  • the coordinate conversion performed by the coordinate conversion unit 214 will be described.
  • As shown in FIG. 4, the three-dimensional data generated by the three-dimensional data generation unit 212 based on the output data of the real camera 10 is represented by coordinates (x, y, z), with the line-of-sight direction E of the real camera 10 as the z-axis and the horizontal direction as the x-axis.
  • When the coordinate space of the real camera 10 is rotated around a straight line passing through the specific point 313 and parallel to the x-axis, a coordinate space represented by the coordinates (x, y', z') is obtained, with the line-of-sight direction V of the virtual camera 312 as the z'-axis and the horizontal direction of the virtual touch panel as the x-axis.
  • After this rotation, the camera position is located on a plane B' behind the reference plane B, so the camera position is translated along the direction V to the reference plane B.
  • The translation distance Dz is S - S·cosθ.
  • By this rotation and translation, the three-dimensional data in the coordinate space represented by (x, y, z) based on the output data of the real camera 10 can be converted into virtual three-dimensional data in the coordinate space of the virtual camera 312 represented by (x, y', z''), as sketched below.
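  • A minimal sketch of this conversion is shown below, assuming the real-camera points are given as an (N, 3) array with z along the line-of-sight direction E. The rotation direction and sign conventions are assumptions, and the function name is hypothetical.

```python
import numpy as np

def to_virtual_coords(points_xyz: np.ndarray, theta: float, s: float) -> np.ndarray:
    """Express real-camera points (x, y, z) in the virtual-camera frame (x, y', z'').

    points_xyz : (N, 3) array, z measured along the real camera's line of sight E
    theta      : angle between E and V in radians (sign convention assumed)
    s          : distance S from the real camera's reference point to the specific point 313
    """
    c = np.array([0.0, 0.0, s])             # specific point 313 on the first straight line
    rot_x = np.array([                       # rotation about an axis parallel to the x-axis
        [1.0, 0.0, 0.0],
        [0.0, np.cos(theta), -np.sin(theta)],
        [0.0, np.sin(theta),  np.cos(theta)],
    ])
    # rotate the coordinate space about the specific point 313 ...
    virtual = (points_xyz - c) @ rot_x + c
    # ... then translate by Dz = S - S*cos(theta) so that the virtual camera's
    # reference point lies on the reference plane B; z'' = 0 on plane B
    virtual[:, 2] -= s - s * np.cos(theta)
    return virtual
```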
  • the pointer detection unit 215 detects the point closest to the virtual camera 312 as the tip of the pointer 321 based on the virtual three-dimensional data converted by the coordinate conversion unit 214 .
  • the tip of the indicator 321 is the pointing position 322 of the operator.
  • the pointer detection unit 215 outputs virtual three-dimensional coordinates of the pointing position 322 based on the virtual three-dimensional data.
  • The contact determination unit 216 calculates the distance Vz (third distance), which is the shortest distance in the z'' direction (the direction V) from the virtual camera 312 to the pointing position 322 of the indicator 321, and compares the distance Vz with the distance Pz (second distance) in the z'' direction from the virtual camera 312 to the virtual touch panel.
  • In other words, the contact determination unit 216 compares the distance Vz, which is the shortest distance from the reference plane B passing through the reference point of the virtual camera 312 to the pointing position 322 of the indicator 321, with the distance Pz from the reference plane B to the virtual touch panel.
  • When Vz ≤ Pz, the indicated position 322 of the pointer 321 can be said to be at the same position as the virtual touch panel 311 or closer to the virtual camera 312 than the virtual touch panel 311, so it is determined that the pointer 321 has made an input to the virtual touch panel 311.
  • When Vz > Pz, it is determined that there is no input from the pointer 321 to the virtual touch panel 311.
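  • Continuing the hypothetical sketch above, the tip detection and the contact test then reduce to a minimum search and one comparison per frame.

```python
import numpy as np

def detect_touch(virtual_points: np.ndarray, pz: float):
    """Return (touched, pointing_position) for one frame of virtual 3D data.

    The pointing position 322 is the point whose z'' value is smallest, i.e. the
    point closest to the reference plane B of the virtual camera 312; Vz is its
    z'' value and contact is determined when Vz <= Pz.
    """
    tip = virtual_points[np.argmin(virtual_points[:, 2])]  # tip of the indicator 321
    vz = tip[2]                                            # third distance Vz
    return vz <= pz, tip
```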
  • When it is determined that the pointer 321 has touched the virtual touch panel 311, the operation input signal generating unit 217 generates an operation input signal indicating the operator's operation based on the virtual three-dimensional coordinates of the pointing position 322 at that time.
  • the operation input signal generation unit 217 generates an operation input signal for displaying a cursor at a position corresponding to the virtual three-dimensional coordinates of the pointing position 322 on the display screen of the display means 26 .
  • After that, the operation input signal generation unit 217 generates an operation input signal for moving the cursor, making a selection, moving an object, and so on, according to information on the temporal change of the pointing position 322 output by the contact determination unit 216. Further, in an information processing terminal including the operation input device 20 in which an arbitrary application is installed in advance, it generates an operation input signal for instructing execution of the application indicated by an icon displayed on the display means 26.
  • FIG. 5 is a flowchart showing operation input processing.
  • the operation input process starts when the administrator of the operation input system 1 executes the operation input program.
  • the parameter acquisition unit 213 displays a parameter input screen on the display means 26.
  • the administrator hypothesizes the position and orientation of the virtual touch panel 311 and inputs parameters for specifying the virtual touch panel 311 .
  • the parameter acquisition unit 213 acquires parameters input in advance by the administrator (step S101).
  • the first parameter acquired by the parameter acquisition unit 213 is the angle ⁇ of the line-of-sight direction E of the real camera 10 with respect to the direction V perpendicular to the extension plane P of the virtual touch panel 311 .
  • The second parameter is the distance S from the reference point of the real camera 10 to the specific point 313, where the specific point 313 is the intersection of the second straight line, which passes through the center of the virtual touch panel 311 and extends in the direction V, and the first straight line, which extends in the line-of-sight direction E of the real camera 10.
  • the third parameter is the distance Pz from the reference plane B to the virtual touch panel 311 when the plane passing through the reference point of the real camera 10 and parallel to the virtual touch panel 311 is taken as the reference plane B.
  • the parameters acquired by parameter acquisition unit 213 may include other parameters for specifying virtual touch panel 311 .
  • Next, based on the acquired parameters, the coordinate transformation unit 214 determines a method of coordinate transformation from the three-dimensional data in the coordinate space of the real camera 10 to virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102).
  • the virtual camera 312 is a camera virtualized at a position where the real camera 10 is rotated by an angle ⁇ around the specific point 313 and translated along the direction V to the reference plane B.
  • a coordinate transformation method for transforming the three-dimensional data of the coordinate space of the real camera 10 into virtual three-dimensional data of the coordinate space of the virtual camera 312 is determined (step S102).
  • the data acquisition unit 211 acquires the output data of the real camera 10 (step S103).
  • the three-dimensional data generator 212 develops the output data of the real camera 10 in a three-dimensional coordinate space to generate three-dimensional data. (Step S104: 3D data generation step).
  • the coordinate transformation unit 214 transforms the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 using the coordinate transformation method determined in step S102 (step S105: coordinates conversion step).
  • the pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointed position 322 of the pointer 321 based on the virtual three-dimensional data, and calculates the virtual three-dimensional coordinates of the pointed position 322 .
  • the contact determination unit 216 calculates the shortest distance Vz from the reference plane B to the indicated position 322 based on the virtual three-dimensional coordinates of the indicated position 322 (step S106).
  • the contact determination unit 216 compares the distance Pz from the reference plane B to the virtual touch panel 311 with the distance Vz to the indicated position 322 (step S107), and determines whether or not the distance Vz is equal to or less than the distance Pz (step S108: contact determination step).
  • If the distance Vz is equal to or less than the distance Pz (step S108: Yes), the coordinates on the virtual touch panel 311 are calculated based on the virtual three-dimensional coordinates of the pointing position 322 obtained in step S106 and output to the operation input signal generation unit 217 (step S109). If the distance Vz is greater than the distance Pz (step S108: No), the process returns to step S103.
  • After the coordinates on the virtual touch panel 311 obtained in step S109 have been output, if the administrator issues an instruction to end the operation input process (step S110: Yes), the process ends. If there is no end instruction (step S110: No), the process returns to step S103.
  • the operation input signal generation unit 217 generates an operation input signal based on the time change of the coordinates on the virtual touch panel 311 obtained in step S109 (operation input signal generation step) and outputs it.
  • the operation input signal is a signal for instructing selection, movement, or the like by moving a cursor, or a signal for instructing execution of an application installed in advance.
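  • One way to put steps S101 to S110 together is the frame-by-frame loop sketched below; camera.read_points, emit_signal and should_stop are hypothetical stand-ins, not interfaces named in the publication, and the helpers come from the earlier sketches.

```python
import numpy as np

def operation_input_loop(camera, params, emit_signal, should_stop):
    """Sketch of the flow of FIG. 5 (steps S103-S110) under assumed interfaces."""
    while not should_stop():                                    # step S110
        points = np.asarray(camera.read_points(), dtype=float)  # steps S103-S104
        virtual = to_virtual_coords(points, params.theta, params.s)  # step S105
        touched, tip = detect_touch(virtual, params.pz)         # steps S106-S108
        if touched:
            # step S109: x and y' of the tip serve as the coordinates on the
            # virtual touch panel 311 passed to the signal generation unit 217
            emit_signal((tip[0], tip[1]))
```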
  • In this way, the operation input device 20 converts the three-dimensional data based on the output data of the real camera 10 into virtual three-dimensional data of the virtual camera 312, compares the pointing position 322 with the position of the virtual touch panel 311, and thereby detects the touch operation of the pointer 321 on the virtual touch panel 311.
  • the real camera 10 is fixed with respect to the display means 26 with the line-of-sight direction directed in a direction inclined with respect to the pointing direction of the indicator 321 .
  • a three-dimensional data generation unit 212 of the operation input device 20 generates three-dimensional data based on the output data of the real camera 10 .
  • The coordinate transformation unit 214 assumes a virtual camera 312 whose line-of-sight direction is a direction V obtained by rotating the line-of-sight direction E of the real camera 10 by an angle θ about a specific point 313, and converts the three-dimensional data of the real camera 10 into virtual three-dimensional data of the virtual camera 312.
  • The pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointing position 322, and the contact determination unit 216 assumes as the virtual touch panel 311 a surface separated from the virtual camera 312 by the distance Pz in the direction V.
  • The contact determination unit 216 compares the distance Vz in the direction V from the virtual camera 312 to the pointing position 322 with the distance Pz from the virtual camera 312 to the virtual touch panel 311, and determines that the pointer 321 has touched the virtual touch panel 311 when the distance Vz is equal to or less than the distance Pz. This makes it possible to accurately detect non-contact operation input with a simple configuration in which the real camera 10 is installed with its line-of-sight direction inclined.
  • When the coordinate transformation unit 214 determines the coordinate transformation method, by changing the direction of rotation from the real camera 10 to the virtual camera 312, three-dimensional data from a real camera 10 installed at another location can also be converted into virtual three-dimensional data of the virtual camera 312.
  • In the present embodiment, a rotation angle θ from the real camera 10 to the virtual camera 312 is set with a straight line parallel to the x-axis as the rotation axis, and the coordinate transformation represented by this rotation is performed.
  • When the real camera is installed at the position of the real camera 11 in FIG. 6, for example, a rotation angle is set with a straight line parallel to the y-axis as the rotation axis, and the coordinate transformation represented by this rotation is performed.
  • When the real camera is installed at a position requiring both, a rotation angle with a straight line parallel to the x-axis as the rotation axis and a rotation angle with a straight line parallel to the y-axis as the rotation axis are set, and coordinate transformation represented by rotation about the straight line parallel to the x-axis and rotation about the straight line parallel to the y-axis may be performed.
  • In other words, the rotation from the real camera 10 to the virtual camera 312 is a rotation about a straight line that passes through the specific point 313 and is parallel to the extension direction of the virtual touch panel 311, and coordinate transformation represented by this rotation is performed. Thereby, a virtual touch operation on the virtual touch panel 311 can be detected regardless of the installation position of the real camera 10, as sketched below.
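  • A sketch of the generalized conversion is given below; the composition order of the two rotations and the sign conventions are assumptions.

```python
import numpy as np

def to_virtual_coords_general(points_xyz: np.ndarray, theta_x: float,
                              theta_y: float, s: float) -> np.ndarray:
    """Variant of to_virtual_coords for a real camera mounted at a side or corner
    of the display: rotations about straight lines through the specific point 313
    parallel to the x- and y-axes are composed before the translation onto the
    reference plane B."""
    c = np.array([0.0, 0.0, s])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(theta_x), -np.sin(theta_x)],
                   [0.0, np.sin(theta_x),  np.cos(theta_x)]])
    ry = np.array([[np.cos(theta_y), 0.0, np.sin(theta_y)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta_y), 0.0, np.cos(theta_y)]])
    r = ry @ rx
    virtual = (points_xyz - c) @ r + c
    # shift so that the image of the real camera's reference point (the virtual
    # camera's reference point) lies on the reference plane B (z'' = 0)
    virtual[:, 2] -= ((-c) @ r + c)[2]
    return virtual
```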
  • As long as the real camera 10 can capture the pointer 321 from a direction inclined with respect to the pointing direction of the pointer 321 operated by the operator toward the virtual touch panel 311, it may be positioned above, below, to the left of, to the right of, or behind the outer periphery of the virtual touch panel 311.
  • By setting the angle θ to a value of 90° or more where necessary, coordinate conversion can still be performed appropriately, and a virtual touch operation on the virtual touch panel 311 can be detected.
  • FIG. 7 is a schematic side view of the display unit 26, the real camera 10, the virtual touch panel 311, and the hover surface 315 according to the second embodiment.
  • The operation input system 1 according to the second embodiment has the same configuration as that of the first embodiment and executes the same operation input processing; the difference is that a hover surface 315 is assumed.
  • the hover surface 315 is a surface parallel to the virtual touch panel 311 and is located at a predetermined distance Hz from the virtual touch panel 311 .
  • the distance Hz between the virtual touch panel 311 and the hover surface 315 can be set by the administrator, and is a value between 5 cm and 10 cm, for example.
  • the functions and operations of the data acquisition unit 211, the three-dimensional data generation unit 212, the coordinate conversion unit 214, and the pointer detection unit 215 are the same as in the first embodiment.
  • The parameter acquisition unit 213 acquires the distance Hz from the virtual touch panel 311 to the hover surface 315 in addition to the parameters of the first embodiment (the angle θ, the distance S, and the distance Pz).
  • The contact determination unit 216 determines whether the pointing position 322 has passed the hover surface 315 by comparing the distance Vz with the distance from the reference plane B to the hover surface 315 (fourth distance: Pz + Hz).
  • When the pointing position 322 passes the hover surface 315, the operation input signal generation unit 217 generates an operation signal for producing a display indicating that the pointing position is approaching the virtual touch panel 311.
  • For example, the operation input signal generator 217 may cause the display means 26 to display a cursor when the distance Vz becomes equal to or less than the distance (Pz + Hz). This allows the operator to recognize the current pointing position. Further, the size or shape of the cursor may be changed so that the distance to the virtual touch panel 311 can be recognized. For example, as the pointing position 322 crosses the hover surface 315 and approaches the virtual touch panel 311, the cursor may be made smaller step by step, or its color may be made darker step by step, as in the sketch below.
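  • The sketch below shows one possible mapping from the distance Vz to such cursor feedback; the three states and the linear size interpolation are illustrative choices, not taken from the publication.

```python
def cursor_feedback(vz: float, pz: float, hz: float) -> dict:
    """Map the distance Vz to a hover-feedback state for the display means 26."""
    if vz > pz + hz:                     # farther away than the hover surface 315
        return {"visible": False}
    if vz > pz:                          # between the hover surface and the virtual touch panel
        ratio = (vz - pz) / hz           # 1.0 at the hover surface, 0.0 at the panel
        return {"visible": True, "touching": False,
                "size_px": 10 + 20 * ratio}   # e.g. shrink from 30 px down to 10 px
    return {"visible": True, "touching": True, "size_px": 10}
```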
  • In the case of a physical touch panel, the operator can visually recognize the distance to the touch panel, but in the case of the virtual touch panel 311 in space, the distance to the virtual touch panel 311 cannot be visually recognized.
  • In the present embodiment, by assuming the hover surface 315, a display is produced when the pointing position 322 approaches the virtual touch panel 311, before the operator touches the virtual touch panel 311, so that the operator can recognize the area where the cursor is located and the distance to the virtual touch panel.
  • As a result, the positional error when the virtual touch panel 311 is touched can be reduced, and operability in space can be improved.
  • As described above, in the second embodiment, the hover surface 315 is assumed to be spaced apart from the virtual touch panel 311 in the direction opposite to the display surface of the display means 26, and the contact determination unit 216 determines passage of the hover surface 315 before the pointing position 322 touches the virtual touch panel 311. Then, the operation input signal generation unit 217 generates an operation signal for displaying that the pointing position 322 is approaching the virtual touch panel 311. This makes it possible for the operator to recognize the area where the pointing position 322 exists and the distance to the virtual touch panel 311.
  • FIG. 8 is a perspective view showing a schematic configuration of display means 26, real camera 10, virtual touch panel 311, and instruction effective area 330 according to the third embodiment.
  • FIG. 9 is a flowchart of operation input processing according to the third embodiment.
  • the operation input system 1 according to Embodiment 3 has the same configuration as Embodiments 1 and 2, but the functions of the parameter acquisition section 213 and the pointer detection section 215 are partially different.
  • In the third embodiment, the operation input device 20 assumes an instruction effective area 330 in which an instruction by the indicator 321 is valid; only the indicator 321 inside the instruction effective area 330 is detected, and the indicator 321 is not detected outside the instruction effective area 330.
  • The instruction effective area 330 is an area limited out of the spatial region in front of the display means 26, as shown in FIG. 8.
  • The shape of the boundary of the instruction effective area 330 is arbitrary; for example, it may be a rectangular parallelepiped as shown in FIG. 8, a cylindrical or elliptic-cylindrical shape whose side surface is perpendicular to the virtual touch panel 311, or a shape whose cross-section expands or narrows with distance from the virtual touch panel 311. In the present embodiment, the case where the boundary of the instruction effective area 330 is a rectangular parallelepiped will be described.
  • A hover surface 315 similar to that in the second embodiment may further be included in the instruction effective area 330, and it may be determined whether or not the pointer 321 has passed through the hover surface 315 before the pointing position 322 of the pointer 321 touches the virtual touch panel 311.
  • the parameter acquisition unit 213 acquires parameters for specifying the instruction effective area 330 in addition to parameters for specifying the virtual touch panel 311 .
  • The parameters for specifying the virtual touch panel 311 are, as in Embodiment 1, the angle θ of the line-of-sight direction E of the real camera 10, the distance S from the reference point of the real camera 10 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
  • The parameter for specifying the instruction effective area 330 is an arbitrary parameter indicating the boundary of the instruction effective area 330 in the coordinate space of the virtual camera 312, for example the upper and lower limits of the virtual three-dimensional coordinate values indicating the boundary of the instruction effective area 330, or the coefficients of a three-dimensional function representing its boundary surface.
  • the parameters acquired by parameter acquisition section 213 may include other parameters for specifying virtual touch panel 311 or instruction effective area 330 .
  • Based on the data within the instruction effective area 330 specified by the parameters, out of the virtual three-dimensional data of the virtual camera 312 output by the coordinate transformation unit 214, the pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointed position 322 of the pointer 321, as in the sketch below.
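  • For a rectangular-parallelepiped boundary, the exclusion of data outside the instruction effective area 330 is a per-axis bounds test in the virtual coordinate space; the bound values below are hypothetical.

```python
import numpy as np

def filter_instruction_area(virtual_points: np.ndarray,
                            lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    """Keep only the virtual 3D points inside the rectangular-parallelepiped
    instruction effective area 330 given by per-axis lower/upper bounds (step S202)."""
    inside = np.all((virtual_points >= lower) & (virtual_points <= upper), axis=1)
    return virtual_points[inside]

# example bounds in the virtual camera's coordinate space (illustrative values, metres)
# lower = np.array([-0.15, -0.10, 0.0]); upper = np.array([0.15, 0.10, 0.40])
```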
  • Other configurations of the operation input device 20 are the same as in the first and second embodiments. Operation input processing of the operation input device 20 configured in this manner will be described with reference to the flowchart shown in FIG. 9. In FIG. 9, the processes assigned the same reference numerals as those in FIG. 5 are the same as in the first embodiment.
  • the parameter acquisition unit 213 displays a parameter input screen on the display means 26.
  • the administrator hypothesizes the position and orientation of the virtual touch panel 311 and inputs parameters for specifying the virtual touch panel 311 and parameters for specifying the instruction effective area 330 .
  • the parameter acquisition unit 213 acquires parameters input in advance by the administrator (step S201).
  • The parameters for specifying the virtual touch panel 311 acquired by the parameter acquisition unit 213 are, as in Embodiment 1, the angle θ of the line-of-sight direction E of the real camera 10, the distance S from the reference point of the real camera 10 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
  • a parameter for specifying the instruction effective area 330 is a parameter indicating the boundary of the instruction effective area 330 in the coordinate space of the virtual camera 312 .
  • the parameters acquired by parameter acquisition section 213 may include other parameters for specifying virtual touch panel 311 or instruction effective area 330 .
  • Next, based on the acquired parameters, the coordinate transformation unit 214 determines a method of coordinate transformation from the three-dimensional data in the coordinate space of the real camera 10 to virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102).
  • the data acquisition unit 211 acquires the output data of the real camera 10 (step S103).
  • the three-dimensional data generator 212 develops the output data of the real camera 10 in a three-dimensional coordinate space to generate three-dimensional data. (Step S104: 3D data generation step).
  • The coordinate transformation unit 214 transforms the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 using the coordinate transformation method determined in step S102 (step S105: coordinate conversion step).
  • Next, the pointer detection unit 215 excludes the data outside the instruction effective area 330, specified based on the parameters acquired in step S201, from the virtual three-dimensional data coordinate-transformed in step S105 (step S202).
  • Based on the remaining virtual three-dimensional data, the pointer detection unit 215 detects the point closest to the virtual camera 312 as the pointed position 322 of the pointer 321 and calculates its virtual three-dimensional coordinates. Then, the contact determination unit 216 calculates the shortest distance Vz from the reference plane B to the pointed position 322 based on the virtual three-dimensional coordinates of the pointed position 322 (step S106).
  • the contact determination unit 216 compares the distance Pz from the reference plane B to the virtual touch panel 311 with the distance Vz to the indicated position 322 (step S107), and determines whether or not the distance Vz is equal to or less than the distance Pz (step S108: contact determination step).
  • If the distance Vz is equal to or less than the distance Pz (step S108: Yes), the coordinates on the virtual touch panel 311 are calculated based on the virtual three-dimensional coordinates of the pointing position 322 obtained in step S106 and output to the operation input signal generation unit 217 (step S109). If the distance Vz is greater than the distance Pz (step S108: No), the process returns to step S103.
  • After the coordinates on the virtual touch panel 311 obtained in step S109 have been output, if the administrator issues an instruction to end the operation input process (step S110: Yes), the process ends. If there is no end instruction (step S110: No), the process returns to step S103.
  • the detailed processing of steps S106 to S110 is the same as that of the first embodiment.
  • the operation input signal generation unit 217 generates an operation input signal based on the time change of the coordinates on the virtual touch panel 311 obtained in step S109 (operation input signal generation step) and outputs it.
  • the operation input signal is a signal for instructing selection, movement, or the like by moving a cursor, or a signal for instructing execution of an application installed in advance.
  • As described above, the operation input device 20 detects, from among the data obtained by converting the three-dimensional data based on the output data of the real camera 10 into virtual three-dimensional data of the virtual camera 312, only the data in the instruction effective area 330, and detects the touch operation of the pointer 321 on the virtual touch panel 311 by comparing the pointed position 322 with the position of the virtual touch panel 311.
  • In some installations to which the third embodiment is directed, only a part of the display means 26 is targeted for instruction operation, and other operation units 331, such as a card slot or a bar code reader, are provided. In such a case, the operation input device 20 must avoid erroneously detecting an operation of the other operation unit 331 as an instruction operation.
  • In the third embodiment, the instruction effective area 330 is set and only the pointed position 322 of the indicator 321 existing inside the instruction effective area 330 is detected, so a non-contact operation input can be accurately detected without erroneously detecting an operation of the other operation unit 331 as an instruction operation.
  • As described above, the pointer detection unit 215 of the operation input device 20 according to the third embodiment detects, based on the data within the instruction effective area 330 out of the virtual three-dimensional data of the virtual camera 312, the point closest to the virtual camera 312 as the pointed position 322, and the contact determination unit 216 compares the distance Vz from the virtual camera 312 to the pointed position 322 with the distance Pz from the virtual camera 312 to the virtual touch panel 311 and determines that the pointer 321 has touched the virtual touch panel 311 when the distance Vz is equal to or less than the distance Pz. This makes it possible to accurately detect a non-contact operation input without erroneously detecting another operation performed by the operator outside the instruction effective area 330.
  • FIG. 10 is a diagram showing the arrangement of real cameras according to the fourth embodiment.
  • the operation input system 1 according to Embodiment 4 executes the same operation input processing as in Embodiment 1, but differs in that two or more real cameras including real cameras 12 and 13 are used. Although the number of real cameras may be any number of two or more, in the following description, a configuration including real cameras 12 and 13 will be described.
  • the operator may perform operation input using two or more indicators. For example, by touching the touch panel with two fingers at the same time, various operation inputs are possible depending on the distance between the two fingers or the moving direction and moving distance of the two fingers.
  • the operation input device 20 according to the fourth embodiment can reliably detect an operation input to the virtual touch panel 311 even when there are two or more pointers 321 .
  • the first real camera 12 and the second real camera 13 are fixed at locations separated from each other.
  • the first real camera 12 is installed at the upper right end of the display means 26 and the second real camera 13 is installed at the upper left end of the display means 26 . That is, the first real camera 12 is positioned at the upper right side of the virtual touch panel 311 , and the second real camera 13 is positioned at the upper left side of the virtual touch panel 311 .
  • the line-of-sight directions of the real cameras 12 and 13 are directions inclined with respect to the pointing direction of the indicator 321 and different from each other.
  • the configuration of the operation input device 20 is the same as that of the first embodiment.
  • An administrator of the operation input device 20 selects a master camera from among the two or more real cameras 12 and 13, and sets parameters for the master camera.
  • When the master camera is the real camera 12, the parameter acquisition unit 213 acquires the angle θx about a straight line passing through the specific point 313 and parallel to the x-axis, the angle θy about a straight line passing through the specific point 313 and parallel to the y-axis, the distance S from the real camera 12 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
  • the 3D data generation unit 212 generates 3D data based on the output data of the two or more real cameras 12 and 13 and the position information of the two or more real cameras 12 and 13 . Note that the three-dimensional data generation unit 212 performs calibration for generating a single three-dimensional data for two or more real cameras 12 and 13 in advance.
  • the three-dimensional data generation unit 212 expands the output data of the real camera 12, which is the master camera, into a three-dimensional coordinate space to generate three-dimensional data.
  • The three-dimensional data generator 212 also generates three-dimensional data based on the output data of the real camera 13, which is a slave camera; this is three-dimensional data that has been calibrated and corrected based on the positional relationship between the real cameras 12 and 13 and on the output data.
  • The three-dimensional data generator 212 then supplements the three-dimensional data based on the output data of the real camera 12 with the calibrated three-dimensional data based on the output data of the real camera 13, generating a single set of three-dimensional data, for example as sketched below.
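  • A minimal sketch of this supplementation step: the slave camera's points are mapped into the master camera's coordinate space with a pre-computed calibration (rotation R and translation t, whose names here are assumptions) and then concatenated with the master camera's points.

```python
import numpy as np

def merge_point_clouds(master_points: np.ndarray, slave_points: np.ndarray,
                       r_slave_to_master: np.ndarray,
                       t_slave_to_master: np.ndarray) -> np.ndarray:
    """Supplement the master camera's 3D data (real camera 12) with the slave
    camera's data (real camera 13) expressed in the master frame, producing a
    single set of three-dimensional data."""
    slave_in_master = slave_points @ r_slave_to_master.T + t_slave_to_master
    return np.vstack([master_points, slave_in_master])
```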
  • The coordinate transformation unit 214 assumes a virtual camera 312 whose line-of-sight direction V is obtained by rotating the line-of-sight direction E of the real camera 12 around the specific point 313 by the angle θx about a straight line parallel to the x-axis and by the angle θy about a straight line parallel to the y-axis, and whose reference point is on the reference plane B. Then, the coordinate transformation unit 214 determines a coordinate transformation method for transforming the three-dimensional data based on the output data of the real cameras 12 and 13, generated by the three-dimensional data generation unit 212, into virtual three-dimensional data of the virtual camera 312.
  • the coordinate transformation unit 214 transforms the three-dimensional data generated by the three-dimensional data generation unit 212 into virtual three-dimensional data using the determined coordinate transformation method.
  • The contact determination unit 216 assumes, as the virtual touch panel 311, a surface that is separated from the virtual camera 312 by the distance Pz in the direction V and that is perpendicular to the direction V. Then, the distance Vz in the direction V from the virtual camera 312 to the pointers 323 and 324 is calculated based on the virtual three-dimensional data coordinate-transformed from the three-dimensional data of the real cameras 12 and 13.
  • More precisely, the contact determination unit 216 detects one or more minimum values of the distance in the direction V from the virtual camera 312 to the pointers 323 and 324, calculated based on the virtual three-dimensional data, as one or more distances Vz (third distances). In other words, the contact determination unit 216 plots the distances from the virtual camera 312 to the points in the three-dimensional space based on the virtual three-dimensional data, and detects the points indicating minima of the distance as the pointed positions of the indicators 323 and 324.
  • The contact determination unit 216 compares each of the one or more distances Vz with the distance Pz from the virtual camera 312 to the virtual touch panel 311, and when a distance Vz is equal to or less than the distance Pz, it determines that the corresponding pointer 323 or 324 has touched the virtual touch panel 311, for example as in the sketch below.
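  • One simple reading of "one or more minimum values" is a per-fingertip local minimum of Vz. The sketch below groups near-panel points by x-y proximity and keeps the closest point of each group; the clustering radius is an illustrative assumption.

```python
import numpy as np

def detect_multi_touch(virtual_points: np.ndarray, pz: float,
                       cluster_radius: float = 0.03) -> list:
    """Return the virtual 3D coordinates of every pointed position whose
    distance Vz (z'' value) is at or inside the virtual touch panel (Vz <= Pz).

    Points are grouped greedily by x-y proximity so that, for example, two
    fingers yield two pointed positions; cluster_radius is in the same units
    as the point coordinates (an illustrative value)."""
    candidates = virtual_points[virtual_points[:, 2] <= pz]
    touches = []
    for p in candidates[np.argsort(candidates[:, 2])]:   # closest to plane B first
        if all(np.linalg.norm(p[:2] - t[:2]) > cluster_radius for t in touches):
            touches.append(p)                            # minimum of a new cluster
    return touches
```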
  • the effect of using two or more real cameras 12 and 13 will be described.
  • When viewed from the real camera 12, the pointer 324 may be hidden behind the pointer 323 and fail to be detected.
  • In the present embodiment, the three-dimensional data is supplemented by the output data of the real camera 13, whose installation position and line-of-sight direction are different, so the virtual three-dimensional coordinates of the pointed position of the indicator 324 can also be output.
  • Since the virtual three-dimensional coordinates of the pointed positions are output based on the output data of two or more real cameras 12 and 13, the problem of failing to detect a pointed position can be avoided.
  • the operation input signal generation unit 217 generates an operation input signal based on the time change of the coordinates on the virtual touch panel of the pointing position determined by the contact determination unit 216 to be in contact. At this time, when contact at two or more different designated positions is determined, operation input signals corresponding to two or more preset contacts are generated.
  • As described above, the operation input system 1 according to the fourth embodiment includes two or more real cameras 12 and 13 that are fixed apart from each other and whose line-of-sight directions are inclined with respect to the pointing directions of the indicators 323 and 324 and are different from each other.
  • The three-dimensional data generation unit 212 generates three-dimensional data based on the output data of the two or more real cameras 12 and 13, and the coordinate transformation unit 214 converts it into virtual three-dimensional data using a coordinate transformation method based on the parameters related to the real camera 12, which is the master camera.
  • The contact determination unit 216 acquires one or more distances Vz, which are minimum values of the distance in the direction V from the virtual camera 312 to the pointers 323 and 324 calculated based on the virtual three-dimensional data, so that the pointed positions of one or more indicators 323 and 324 are detected.
  • As a result, a pointer 324 that is not detected via one real camera 12 can be detected via the other real camera 13, and reliable operation input by a plurality of touches is possible.
  • In the embodiments described above, the real cameras 10-13 are installed at the edge of the display means 26 and the administrator sets the parameters related to coordinate transformation, but the real cameras 10-13 may instead be built into the operation input device 20. In this case, when the operation input device 20 is assembled, the line-of-sight direction of the real cameras 10-13 is tilted with respect to the direction perpendicular to the display surface of the display means 26, and a coordinate transformation method determined in advance according to the tilt angle may be used.
  • As shown in FIG. 11, a transparent plate 316 may further be installed in front of the display means 26 and the real camera 10.
  • the transparent plate 316 is an arbitrary transparent plate that transmits the light output by the display means 26, such as a glass plate or an acrylic plate.
  • In this way, the display means 26 and the real camera 10 can be protected, and the operator can be urged to operate in a space a certain distance away.
  • In the embodiments described above, one rectangular, flat virtual touch panel 311 corresponding to one rectangular display surface of the display means 26 is used to detect an operation input, but the shape and size of the virtual touch panel 311 are arbitrary, and it does not have to be flat.
  • Also, two or more virtual touch panels 311 may be assumed: the three-dimensional data obtained by the real cameras 10-13 may be coordinate-transformed by two or more virtual cameras 312, and an operation input to each virtual touch panel 311 may be detected.
  • In that case, the parameter acquisition unit 213 acquires parameters including the shape and size of each virtual touch panel 311, and the coordinate conversion unit 214 may perform coordinate transformation including enlargement, reduction, or deformation of the coordinate values of the three-dimensional data according to the parameters representing the shape and size of the virtual touch panel 311.
  • In the fourth embodiment, two or more real cameras 12 and 13 are used to detect one or more pointed positions, but even with a single real camera 10, one or more distances Vz that are minimum values of the distance from the virtual camera 312 to the indicator 321 may be acquired, and the pointed positions of one or more indicators 321 may be detected.
  • A computer capable of realizing each of the functions described above may be configured by storing and distributing the program on a computer-readable recording medium such as a CD-ROM (Compact Disc Read-Only Memory), a DVD (Digital Versatile Disc), or an MO (Magneto-Optical disc), and installing the program on the computer.
  • When each function is realized by sharing between an OS (Operating System) and an application, or by cooperation between the OS and an application, only the portions other than the OS may be stored in the recording medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

In the present invention, a real camera 10 is fixed in place so that its sight line direction is inclined with respect to the indicated direction of an indicating body 321. A three-dimensional data generation unit 212 of an operation input device 20 generates three-dimensional data on the basis of output data of the real camera 10. A coordinate conversion unit 214 assumes a virtual camera 312 with a sight line direction rotated by an angle θ from the sight line direction of the real camera 10, using a specific point 313 as the center, and converts the three-dimensional data of the real camera 10 to virtual three-dimensional data of the virtual camera 312. A contact determination unit 216 determines that the indicating body 321 has contacted a virtual touch panel 311 if the shortest distance from the virtual camera 312 to the indicating body 321, calculated on the basis of the virtual three-dimensional data, is not greater than the distance from the virtual camera 312 to the virtual touch panel 311.

Description

Operation input device, operation input method, and program
The present invention relates to an operation input device, an operation input method, and a program for inputting information related to an operator's operations.
In recent years, many operation input devices have been developed that allow an operator to input operations to an information processing device without contact. Non-contact operation input devices are attracting attention as a means that can lighten the operator's operation burden and that allows an information processing device to be operated even in the middle of work at a surgical site, a cooking site, or the like.
Such operation input devices include, for example, one in which a camera photographs an indicator such as the operator's finger, the three-dimensional position of the indicator is measured, and an operation input is given to an information processing apparatus based on that three-dimensional position (for example, Patent Document 1).
The interactive projector described in Patent Document 1 uses two cameras: contact of a self-luminous indicator with the projection screen is detected based on its light-emission pattern, and contact of a non-luminous indicator with the projection screen is detected by a position detection unit. It is explained that this improves the accuracy with which contact of the pointer with the screen surface is detected.
JP 2016-186676 A
In the interactive projector described in Patent Document 1, in order to detect the three-dimensional position of the indicator, it is necessary to install the two cameras overhead and direct their line-of-sight direction vertically downward. When this configuration is applied to an ordinary information processing device, a camera must be installed above and in front of the display, which results in a large-scale configuration, increases installation work, and can also get in the way of the operator's operation.
On the other hand, some conventional information processing devices such as personal computers and tablet terminals are equipped with a camera at the edge of the display that captures images in the direction perpendicular to the screen. When such a camera is used, the three-dimensional position of the tip of the indicator in the region close to the display has a large error, and there is the problem that the location indicated by the indicator cannot be specified accurately.
The present invention has been made in view of the above circumstances, and aims to provide an operation input device, an operation input method, and a program capable of accurately detecting non-contact operation input with a simple configuration.
In order to achieve the above object, the operation input device of the present invention includes:
a three-dimensional data generation unit that generates three-dimensional data based on output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to the pointing direction of an indicator;
a coordinate conversion unit that assumes a virtual camera whose line-of-sight direction is the direction of a second straight line obtained by rotating a first straight line, which extends in the line-of-sight direction of the real camera, by a predetermined angle about a specific point on the first straight line located at a predetermined first distance from the real camera, and converts the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
a contact determination unit that, taking a surface separated from the virtual camera by a predetermined second distance in a direction parallel to the second straight line as a virtual touch panel, compares a third distance, which is the shortest distance from the virtual camera to the indicator in the direction parallel to the second straight line calculated based on the virtual three-dimensional data, with the second distance from the virtual camera to the virtual touch panel, and determines that the indicator has come into contact with the virtual touch panel when the third distance is equal to or less than the second distance; and
an operation input signal generation unit that generates an operation input signal based on the virtual three-dimensional data when the contact determination unit determines that the indicator has made contact.
According to the present invention, it is possible to accurately detect non-contact operation input with a simple configuration in which the camera for generating three-dimensional data is installed with its line-of-sight direction inclined.
FIG. 1 is a block diagram showing the hardware configuration of the operation input system according to Embodiment 1.
FIG. 2 is a functional block diagram showing the functional configuration of the operation input device according to Embodiment 1.
FIG. 3 is a schematic side view of the display means, the real camera, and the virtual touch panel according to Embodiment 1.
FIG. 4 is a diagram showing the positional relationship between the real camera and the virtual camera.
FIG. 5 is a flowchart of the operation input processing according to Embodiment 1.
FIG. 6 is a diagram showing installation positions of the real camera.
FIG. 7 is a schematic side view of the display means, the real camera, the virtual touch panel, and the hover surface according to Embodiment 2.
FIG. 8 is a perspective view showing the schematic configuration of the display means, the real camera, the virtual touch panel, and the instruction effective area according to Embodiment 3.
FIG. 9 is a flowchart of the operation input processing according to Embodiment 3.
FIG. 10 is a diagram showing the arrangement of real cameras according to Embodiment 4.
FIG. 11 is a schematic side view of the display means, a transparent plate, and a virtual touch panel according to another embodiment.
(Embodiment 1)
Embodiment 1 of the present invention will be described in detail with reference to the drawings.
The operation input system 1 according to the present embodiment is an information processing system that performs processing based on an operation input signal generated by determining an operation using an indicator such as an operator's finger. FIG. 1 is a block diagram showing the hardware configuration of the operation input system 1 according to the first embodiment. As shown in FIG. 1, the operation input system 1 comprises a real camera 10 and an operation input device 20.
The real camera 10 is a 3D camera (3 Dimensional Camera) that outputs data representing three-dimensional information. Any conventional method may be used to acquire the three-dimensional information; for example, a stereo method is used. The real camera 10 is preferably capable of wide-angle imaging and is equipped with, for example, a wide-angle lens with an FOV of 84°. In the first embodiment, the real camera 10 externally connected to the operation input device 20 will be described, but it may instead be a dedicated camera built into the operation input device 20.
The operation input device 20 is an arbitrary information processing terminal, and may be a general-purpose information processing terminal such as a personal computer, smartphone, or tablet terminal in which a program for operation input processing is installed, or a dedicated terminal. As shown in FIG. 1, the operation input device 20 includes a CPU (Central Processing Unit) 21, a RAM (Random Access Memory) 22, a ROM (Read Only Memory) 23, a communication interface 24, a storage unit 25, and a display means 26.
The operation input system 1 detects, based on the output data of the real camera 10, the operator's virtual touch operation on a virtual touch panel imagined in front of the display surface of the display means 26 of the operation input device 20.
The display means 26 of the operation input device 20 is an arbitrary display device that displays information such as images and characters. The display means 26 may be a tangible display, including a liquid crystal display or an organic EL (Electro-Luminescence) display built into or externally attached to a personal computer, smartphone, tablet terminal, or the like, or may be an intangible or fluid display device such as a hologram display, a water display, a glasses-type display, or a virtual display.
The indicator is an object that can be moved by the operator in order to specify a position on the display screen of the display means 26, and has an elongated shape extending in the pointing direction. The indicator is, for example, the operator's finger or a pointing stick. The arrangement and functional configuration of the real camera 10 and the operation input device 20 will be described in detail below.
The real camera 10 is fixed with respect to the display surface of the display means 26 of the operation input device 20. The line-of-sight direction of the real camera 10 (the direction toward the center of its field of view) is inclined with respect to the pointing direction of the indicator. For example, when the display means 26 is the liquid crystal display of a personal computer and the real camera 10 is fixed to an edge of the liquid crystal display, the line-of-sight direction is the direction toward the area in front of the center of the display surface of the display means 26, and is inclined with respect to the direction perpendicular to the display surface. In the following description of the first embodiment, the display means 26 is the liquid crystal display of a personal computer and the real camera 10 is fixed to the central upper end of the liquid crystal display. In this case, the line-of-sight direction of the real camera 10 is obliquely downward with respect to the direction perpendicular to the display surface of the display means 26.
The CPU 21 of the operation input device 20 controls each component of the operation input device 20 and executes each process, including the operation input process, by executing the programs stored in the ROM 23 and the storage unit 25. The RAM 22 is a memory from and to which data can be read and written at high speed, and temporarily stores data output from the real camera 10, data read from the storage unit 25, and the like for the data processing executed by the CPU 21. The ROM 23 is a read-only memory that stores the programs for the processes executed by the CPU 21, setting values, and the like.
The communication interface 24 is an interface through which data is exchanged with the real camera 10. The storage unit 25 is a large-capacity storage device composed of a flash memory or the like. The storage unit 25 stores the output data of the real camera 10 and data generated by the processing of the CPU 21. The storage unit 25 also stores the programs executed by the CPU 21. The display means 26 displays information such as images and characters generated by the CPU 21.
By executing the operation input processing program stored in the storage unit 25, the CPU 21 and the RAM 22 function as a data acquisition unit 211, a three-dimensional data generation unit 212, a parameter acquisition unit 213, a coordinate conversion unit 214, an indicator detection unit 215, a contact determination unit 216, and an operation input signal generation unit 217, as shown in FIG. 2.
The data acquisition unit 211 acquires the output data of the real camera 10. The three-dimensional data generation unit 212 develops the output data acquired by the data acquisition unit 211 in a three-dimensional coordinate space to generate three-dimensional data. The output data format of the real camera 10 may be any format, and the three-dimensional data generation unit 212 executes data processing according to the output data format of the real camera 10 in order to generate three-dimensional data in a predetermined format. The output data of the real camera 10 may also be used as the three-dimensional data as is.
The parameter acquisition unit 213 acquires parameters related to the position and orientation of the real camera 10 and the position and size of the virtual touch panel. The parameters are determined in advance by an initial setting input by the administrator of the operation input device 20.
The coordinate conversion unit 214 converts the three-dimensional data generated based on the output data of the real camera 10 into virtual three-dimensional data corresponding to the virtual touch panel. For the conversion into virtual three-dimensional data, a virtual camera obtained by virtually moving the real camera 10 is assumed. That is, the coordinate conversion unit 214 converts the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera.
The indicator detection unit 215 detects the point closest to the virtual camera as the tip of the indicator based on the virtual three-dimensional data. The contact determination unit 216 determines contact with the virtual touch panel based on whether the tip of the indicator detected by the indicator detection unit 215 has come closer to the display means 26 than the virtual touch panel.
Here, details of the three-dimensional data generation unit 212, the parameter acquisition unit 213, the coordinate conversion unit 214, the indicator detection unit 215, and the contact determination unit 216 will be described with reference to FIGS. 3 and 4. FIG. 3 is a schematic diagram of the real camera 10, the display means 26, and the virtual touch panel 311 viewed from the side. FIG. 4 is a diagram showing the positional relationship between the real camera 10 and the virtual camera 312.
The three-dimensional data generation unit 212 generates three-dimensional data based on the output data of the real camera 10 acquired by the data acquisition unit 211. When an indicator 321 exists in the three-dimensional coordinate space of the real camera 10, the three-dimensional data generation unit 212 can generate three-dimensional data including the indicator 321.
In FIG. 3, when the plane on which the virtual touch panel 311 extends is defined as an extension plane P, and the plane parallel to the extension plane P that passes through the reference point serving as the origin of the three-dimensional coordinates of the real camera 10 is defined as a reference plane B, the distance between the extension plane P and the reference plane B is Pz. The distance Pz is a value within a range determined according to specifications such as the viewing angle or the three-dimensional coordinate measurable range of the real camera 10, and is determined by the administrator's input of initial settings.
In Embodiment 1, the real camera 10 is fixed to the central upper end of the display means 26 with its line-of-sight direction pointing obliquely downward. When the line-of-sight direction of the real camera 10 is E and the direction passing through the center of the virtual touch panel 311 and perpendicular to the extension plane P is V, the angle formed by the direction E and the direction V is θ. θ is a parameter input by the administrator according to the orientation of the real camera 10; the administrator imagines the position and size of the virtual touch panel 311 and determines the distance Pz from the reference plane B to the virtual touch panel 311 together with the angle θ.
The conversion from the three-dimensional data of the real camera 10 to the virtual three-dimensional data is performed assuming a virtual camera 312 obtained by rotating and translating the real camera 10, as shown in FIG. 4. The line-of-sight direction of the virtual camera 312 is defined as the direction V that passes through the center of the virtual touch panel 311 and is perpendicular to the virtual touch panel 311. The reference point for determining the virtual three-dimensional data of the virtual camera 312 is defined to lie on the reference plane B. When the intersection of the direction E and the direction V is taken as a specific point 313, the distance between the reference point of the real camera 10 and the specific point 313 is S.
The parameters that the parameter acquisition unit 213 acquires from the administrator's input include the angle θ for defining the line-of-sight direction of the real camera 10, the distance Pz for defining the position of the virtual touch panel 311, and the distance S for defining the specific point 313 that serves as the center of rotation from the real camera 10 to the virtual camera 312, as shown in FIGS. 3 and 4. The administrator determines the parameters in consideration of the performance of the real camera 10 as well. By setting these parameters, the method of converting the three-dimensional data into virtual three-dimensional data can be specified.
In other words, a virtual camera 312 is assumed whose line-of-sight direction is the direction V of a second straight line obtained by rotating the first straight line, which extends in the line-of-sight direction E of the real camera 10, by the angle θ about the specific point 313 located on the first straight line at the distance S (first distance) from the reference point of the real camera 10. The three-dimensional data in the coordinate space of the real camera 10 is then coordinate-transformed into virtual three-dimensional data in the coordinate space of the virtual camera 312.
The virtual touch panel 311 is separated from the reference point of the virtual camera 312 by Pz (second distance) in the direction parallel to the second straight line. The shape of the virtual touch panel 311 may be any shape as long as it can correspond to the display surface of the display means 26. In the following description, for simplicity, the case where the display surface of the display means 26 is rectangular and the virtual touch panel 311 is also rectangular will be described. In this case, the second straight line passes through the center point of the rectangular virtual touch panel 311, and the virtual touch panel 311 extends in the direction perpendicular to the second straight line. The second straight line passes through the reference point of the virtual camera 312 and the center of the virtual touch panel 311.
The coordinate transformation performed by the coordinate conversion unit 214 will now be described. As shown in FIG. 4, the three-dimensional data generated by the three-dimensional data generation unit 212 based on the output data of the real camera 10 is represented by coordinates (x, y, z), where the line-of-sight direction E of the real camera 10 is the z-axis and the direction parallel to the horizontal direction of the virtual touch panel is the x-axis. When this coordinate space of the real camera 10 is rotated about a straight line passing through the specific point 313 and parallel to the x-axis, a coordinate space represented by coordinates (x, y', z') is obtained, in which the line-of-sight direction V of the virtual camera 312 is the z'-axis and the horizontal direction of the virtual touch panel is the x-axis.
Here, when the real camera 10 is rotated, the camera position lies at B', behind the reference plane B, so the camera position is translated along the direction V to the reference plane B. The translation distance Dz is S - S·cosθ. That is, the virtual three-dimensional data in the coordinate space of the virtual camera 312 is represented by coordinates (x, y', z'') obtained by shifting (x, y', z') by Dz (Dz = S - S·cosθ) in the z'-axis direction. In this way, the three-dimensional data in the coordinate space represented by (x, y, z) based on the output data of the real camera 10 can be converted, by rotation and translation, into virtual three-dimensional data in the coordinate space represented by (x, y', z'') of the virtual camera 312.
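For concreteness, the rotation and translation described above might look like the following minimal sketch in Python. It assumes the three-dimensional data is available as an N x 3 NumPy array of (x, y, z) points in the real camera's coordinate space; the function name to_virtual_coordinates, the NumPy dependency, and the sign conventions of the rotation and translation are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def to_virtual_coordinates(points_xyz: np.ndarray, theta_rad: float, s: float) -> np.ndarray:
    """Convert real-camera 3D points to virtual-camera 3D points (illustrative sketch).

    points_xyz: N x 3 array of (x, y, z) points, with z along the real camera's
                line-of-sight direction E and the origin at its reference point.
    theta_rad:  angle theta between the direction E and the direction V.
    s:          distance S from the real camera's reference point to the specific point 313.
    """
    # The specific point 313 lies on the real camera's line of sight at distance s.
    pivot = np.array([0.0, 0.0, s])

    # Rotate about a line through the pivot parallel to the x-axis (sign convention assumed).
    c, sn = np.cos(theta_rad), np.sin(theta_rad)
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0,   c, -sn],
                      [0.0,  sn,   c]])
    rotated = (points_xyz - pivot) @ rot_x.T + pivot

    # Translate along the new line of sight by Dz = S - S*cos(theta) so that the
    # virtual camera's reference point lies on the reference plane B.
    dz = s - s * np.cos(theta_rad)
    rotated[:, 2] -= dz
    return rotated
```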
The indicator detection unit 215 detects the point closest to the virtual camera 312 as the tip of the indicator 321 based on the virtual three-dimensional data converted by the coordinate conversion unit 214. The tip of the indicator 321 is the operator's indicated position 322. The indicator detection unit 215 outputs the virtual three-dimensional coordinates of the indicated position 322 based on the virtual three-dimensional data.
The contact determination unit 216 calculates, based on the virtual three-dimensional coordinates of the indicated position 322 output by the indicator detection unit 215, the distance Vz (third distance) in the z'' direction (the direction V along the second straight line) from the reference point of the virtual camera 312 to the indicated position 322. The contact determination unit 216 then compares the distance Vz with the distance Pz (second distance) in the z'' direction from the virtual camera 312 to the virtual touch panel.
In other words, the contact determination unit 216 compares the distance Vz, which is the shortest distance from the reference plane B passing through the reference point of the virtual camera 312 to the indicated position 322 of the indicator 321, with the distance Pz from the reference plane B to the virtual touch panel. As shown in FIG. 4, when Vz ≤ Pz, the indicated position 322 of the indicator 321 can be said to be at the same position as the virtual touch panel 311 or closer to the virtual camera 312 than the virtual touch panel 311. In this case, it is determined that the indicator 321 has touched the virtual touch panel 311. When Vz > Pz, it is determined that there is no input from the indicator 321 to the virtual touch panel 311.
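Continuing the hypothetical sketch above, the detection of the indicated position 322 and the contact determination could be expressed as follows; using the z'' component as the distance to the virtual camera and comparing without any tolerance are simplifying assumptions made only for illustration.

```python
def detect_indicated_position(virtual_points: np.ndarray) -> np.ndarray:
    """Return the point closest to the virtual camera along z'' as the indicated position 322."""
    return virtual_points[np.argmin(virtual_points[:, 2])]

def is_touching(indicated_position: np.ndarray, pz: float) -> bool:
    """Contact determination: the indicator touches the virtual touch panel when Vz <= Pz."""
    vz = indicated_position[2]  # distance Vz along the direction V
    return vz <= pz
```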
When the contact determination unit 216 determines that contact has been made by the tip of the indicator 321, which is the indicated position 322, the operation input signal generation unit 217 generates an operation input signal representing the operator's operation based on the virtual three-dimensional coordinates of the indicated position 322 at that time.
Specifically, the operation input signal generation unit 217 generates an operation input signal for displaying a cursor at the position on the display screen of the display means 26 corresponding to the virtual three-dimensional coordinates of the indicated position 322.
After that, the operation input signal generation unit 217 generates operation input signals for moving the cursor and for instructing selection, movement, and the like, according to the information on the temporal change of the indicated position 322 output by the contact determination unit 216. In an information processing terminal including the operation input device 20 in which an arbitrary application is installed in advance, it also generates operation input signals instructing, for example, execution of the application indicated by an icon displayed on the display means 26.
The operation input processing of the operation input device 20 configured as described above will be described along the flowchart shown in FIG. 5. FIG. 5 is a flowchart showing the operation input processing. The operation input processing starts when the administrator of the operation input system 1 executes the operation input program.
First, the parameter acquisition unit 213 displays a parameter input screen on the display means 26. The administrator imagines the position and orientation of the virtual touch panel 311 and inputs parameters for specifying the virtual touch panel 311. The parameter acquisition unit 213 acquires the parameters input in advance by the administrator (step S101).
The first parameter acquired by the parameter acquisition unit 213 is the angle θ of the line-of-sight direction E of the real camera 10 with respect to the direction V perpendicular to the extension plane P of the virtual touch panel 311. The second parameter is the distance S from the reference point of the real camera 10 to the specific point 313, where the specific point 313 is the intersection of the second straight line passing through the center of the virtual touch panel 311 and extending in the direction V and the first straight line extending in the line-of-sight direction E of the real camera 10. The third parameter is the distance Pz from the reference plane B to the virtual touch panel 311, where the reference plane B is the plane passing through the reference point of the real camera 10 and parallel to the virtual touch panel 311. The parameters acquired by the parameter acquisition unit 213 may include other parameters for specifying the virtual touch panel 311.
Based on the parameters acquired by the parameter acquisition unit 213, the coordinate conversion unit 214 determines the method of coordinate transformation from the three-dimensional data in the coordinate space of the real camera 10 to the virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102). The virtual camera 312 is a camera imagined at the position obtained by rotating the real camera 10 by the angle θ about the specific point 313 and translating it along the direction V to the reference plane B.
In the example shown in FIG. 4, the coordinate conversion unit 214 assumes the coordinate space of the virtual camera 312 obtained by rotating the coordinate space of the real camera 10 by the angle θ about the specific point 313 and translating it along the direction V by the length Dz (Dz = S - S·cosθ), and determines the coordinate transformation method for converting the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102).
The data acquisition unit 211 acquires the output data of the real camera 10 (step S103). The three-dimensional data generation unit 212 develops the output data of the real camera 10 in a three-dimensional coordinate space to generate three-dimensional data (step S104: three-dimensional data generation step).
After that, the coordinate conversion unit 214 converts the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 using the coordinate transformation method determined in step S102 (step S105: coordinate conversion step).
Next, the indicator detection unit 215 detects the point closest to the virtual camera 312 as the indicated position 322 of the indicator 321 based on the virtual three-dimensional data, and calculates the virtual three-dimensional coordinates of the indicated position 322. The contact determination unit 216 then calculates the shortest distance Vz from the reference plane B to the indicated position 322 based on the virtual three-dimensional coordinates of the indicated position 322 (step S106). The contact determination unit 216 compares the distance Pz from the reference plane B to the virtual touch panel 311 with the distance Vz to the indicated position 322 (step S107), and determines whether the distance Vz is equal to or less than the distance Pz (step S108: contact determination step).
When the distance Vz is equal to or less than the distance Pz (step S108: Yes), the coordinates on the virtual touch panel 311 are calculated based on the virtual three-dimensional coordinates of the indicated position 322 obtained in step S106 and are output to the operation input signal generation unit 217 (step S109). When the distance Vz is greater than the distance Pz (step S108: No), the process returns to step S103.
After the coordinates on the virtual touch panel 311 obtained in step S109 are output, the process ends if the administrator has issued an instruction to end the operation input processing (step S110: Yes). If there is no end instruction (step S110: No), the process returns to step S103.
The operation input signal generation unit 217 generates an operation input signal based on the temporal change of the coordinates on the virtual touch panel 311 obtained in step S109 (operation input signal generation step) and outputs it. The operation input signal is a signal for moving the cursor and instructing selection, movement, and the like, or a signal for instructing execution of a pre-installed application and the like.
In this way, the operation input device 20 converts the three-dimensional data based on the output data of the real camera 10 into virtual three-dimensional data of the virtual camera 312, and detects the touch operation of the indicator 321 on the virtual touch panel 311 by comparing the indicated position 322 with the position of the virtual touch panel 311.
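As a rough picture of steps S103 to S110, the processing loop might be organized as below, reusing the hypothetical helpers from the earlier sketches; read_points, emit_touch, and should_stop are placeholder callables, not functions defined in the original.

```python
def operation_input_loop(read_points, emit_touch, should_stop,
                         pz: float, theta_rad: float, s: float) -> None:
    """Rough sketch of the loop of FIG. 5 (steps S103-S110).

    read_points: callable returning an N x 3 array of real-camera points (S103/S104).
    emit_touch:  callable receiving the coordinates of a detected touch (S109).
    should_stop: callable returning True when the administrator ends processing (S110).
    """
    while not should_stop():
        points = read_points()                                    # S103/S104: acquire 3D data
        virtual = to_virtual_coordinates(points, theta_rad, s)    # S105: coordinate conversion
        tip = detect_indicated_position(virtual)                  # S106: indicated position 322
        if is_touching(tip, pz):                                  # S107/S108: compare Vz with Pz
            emit_touch(tip[0], tip[1])                            # S109: output panel coordinates
```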
As described above, in the first embodiment, the real camera 10 is fixed with respect to the display means 26 with its line-of-sight direction inclined with respect to the pointing direction of the indicator 321. The three-dimensional data generation unit 212 of the operation input device 20 generates three-dimensional data based on the output data of the real camera 10. The coordinate conversion unit 214 assumes a virtual camera 312 whose line-of-sight direction is the direction V obtained by rotating the line-of-sight direction E of the real camera 10 by the angle θ about the specific point 313, and converts the three-dimensional data of the real camera 10 into virtual three-dimensional data of the virtual camera 312. The indicator detection unit 215 detects the point closest to the virtual camera 312 as the indicated position 322 based on the virtual three-dimensional data, and the contact determination unit 216, taking the surface separated from the virtual camera 312 by the distance Pz in the direction V as the virtual touch panel 311, compares the distance Vz from the virtual camera 312 to the indicated position 322 in the direction V with the distance Pz from the virtual camera 312 to the virtual touch panel 311, and determines that the indicator 321 has touched the virtual touch panel 311 when the distance Vz is equal to or less than the distance Pz. This makes it possible to accurately detect non-contact operation input with a simple configuration in which the real camera 10 is installed with its line-of-sight direction inclined.
In the first embodiment, the case where the real camera 10 is installed at the central upper end of the display means 26 has been described, but as shown in FIG. 6, the real camera 10 may be installed anywhere on the outer periphery of the virtual touch panel 311. When the coordinate conversion unit 214 determines the coordinate transformation method, changing the direction of rotation from the real camera 10 to the virtual camera 312 allows three-dimensional data from a real camera 10 at another location to be converted into virtual three-dimensional data of the virtual camera 312.
Even when the real camera 10 is installed at a position other than the central upper end, the line-of-sight direction of the real camera 10 is inclined with respect to the pointing direction of the indicator 321. In the first embodiment described above, the rotation angle θ about a straight line parallel to the x-axis is set for the transformation from the real camera 10 to the virtual camera 312, and the coordinate transformation represented by this rotation is performed. In contrast, for example, when the camera is at the position of the real camera 11 in FIG. 6, with the coordinate axis parallel to the vertical direction of the virtual touch panel 311 taken as the y-axis, a rotation angle about a straight line passing through the specific point 313 and parallel to the y-axis may be set, and the coordinate transformation represented by this rotation may be performed. When the camera is at the position of the real camera 12, a rotation angle about a straight line parallel to the x-axis and a rotation angle about a straight line parallel to the y-axis may be set, and the coordinate transformation represented by the rotation about the straight line parallel to the x-axis and the rotation about the straight line parallel to the y-axis may be performed.
That is, the rotation from the real camera 10 to the virtual camera 312 is a rotation about a straight line passing through the specific point 313 and parallel to the extension direction of the virtual touch panel 311, and the coordinate conversion unit 214 performs the coordinate transformation represented by the rotation about each such axis. As a result, a virtual touch operation on the virtual touch panel 311 can be detected regardless of the installation position of the real camera 10.
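To illustrate this remark about other installation positions, the single-axis rotation of the earlier sketch could be generalized as follows; which axis is chosen for a given camera position, and the order of composition for a corner-mounted camera, are assumptions made only for illustration.

```python
def rotation_about_panel_axis(angle_rad: float, axis: str) -> np.ndarray:
    """Rotation matrix about a line parallel to the virtual touch panel's x- or y-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    if axis == "x":  # e.g. camera at the top or bottom edge of the panel
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    if axis == "y":  # e.g. camera at the left or right edge (real camera 11)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    raise ValueError("axis must be 'x' or 'y'")

# For a corner-mounted camera (real camera 12), the two rotations would be composed,
# e.g. R = rotation_about_panel_axis(theta_y, "y") @ rotation_about_panel_axis(theta_x, "x"),
# and applied about the specific point 313 as in the earlier to_virtual_coordinates sketch.
```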
The position of the real camera 10 is not limited to the outer periphery of the virtual touch panel 311; it may also be above, below, to the left or right of, or behind the operator operating toward the virtual touch panel 311, as long as the indicator 321 can be photographed from a direction inclined with respect to the pointing direction of the indicator 321. In this case, by setting the angle θ to a value of 90° or more, the coordinates can be transformed appropriately, and a virtual touch operation on the virtual touch panel 311 can be detected.
(Embodiment 2)
Embodiment 2 of the present invention will be described in detail with reference to FIG. 7. FIG. 7 is a schematic side view of the display means 26, the real camera 10, the virtual touch panel 311, and the hover surface 315 according to the second embodiment.
The operation input system 1 according to the second embodiment has the same configuration as the first embodiment and executes the same operation input processing, but differs in that a hover surface 315 is imagined at a position further away from the display means 26 than the virtual touch panel 311.
The hover surface 315 is a surface parallel to the virtual touch panel 311, located a predetermined distance Hz away from the virtual touch panel 311. The distance Hz between the virtual touch panel 311 and the hover surface 315 can be set by the administrator, and is, for example, a value from 5 cm to 10 cm.
The functions and operations of the data acquisition unit 211, the three-dimensional data generation unit 212, the coordinate conversion unit 214, and the indicator detection unit 215 are the same as in the first embodiment. The parameter acquisition unit 213 acquires the distance Hz from the virtual touch panel 311 to the hover surface 315 in addition to the angle θ, the distance S, and the distance Pz of the first embodiment.
Before comparing the distance Vz from the reference plane B to the indicated position 322 with the distance Pz to the virtual touch panel 311, the contact determination unit 216 compares it with the distance from the reference plane B to the hover surface 315 (fourth distance: Pz + Hz). When the indicated position 322 approaches and passes through the hover surface 315, so that the distance Vz becomes equal to or less than the distance (Pz + Hz), the operation input signal generation unit 217 generates an operation signal for displaying an indication that the indicator is approaching the virtual touch panel 311.
For example, the operation input signal generation unit 217 may cause the display means 26 to display a cursor when the distance Vz becomes equal to or less than the distance (Pz + Hz). This allows the operator to recognize the currently pointed position. Furthermore, a display that allows the distance to the virtual touch panel 311 to be recognized may be performed by changing the size or shape of the cursor. For example, as the indicated position 322 crosses the hover surface 315 and approaches the virtual touch panel 311, the cursor may be made smaller step by step, or its color may be made darker step by step.
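A minimal sketch of this hover feedback, assuming the distances Vz, Pz, and Hz are already available from the earlier hypothetical helpers, might look like the following; the particular scaling rule for the cursor is an invented example of the step-by-step shrinking described above, not a prescribed behavior.

```python
def hover_state(vz: float, pz: float, hz: float) -> str:
    """Classify the indicated position relative to the hover surface 315 and the virtual touch panel 311.

    vz: distance Vz from the reference plane B to the indicated position 322.
    pz: distance Pz from the reference plane B to the virtual touch panel 311.
    hz: distance Hz from the virtual touch panel 311 to the hover surface 315.
    """
    if vz <= pz:
        return "touch"   # contact with the virtual touch panel
    if vz <= pz + hz:
        return "hover"   # between the hover surface and the panel: show the cursor
    return "idle"        # beyond the hover surface: no feedback

def cursor_scale(vz: float, pz: float, hz: float) -> float:
    """Example cursor size: 1.0 at the hover surface, shrinking to 0.5 at the panel."""
    if vz >= pz + hz:
        return 1.0
    ratio = max(0.0, (vz - pz) / hz)  # 1.0 at the hover surface, 0.0 at the panel
    return 0.5 + 0.5 * ratio
```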
With a conventional touch panel, the operator can visually recognize the distance to the touch panel, but in the case of the virtual touch panel 311 imagined in space, the distance to the virtual touch panel 311 cannot be recognized visually. In contrast, with the configuration of the second embodiment in which the hover surface 315 is provided in front of the virtual touch panel 311, before the operator touches the virtual touch panel 311, the operator can recognize the area in which the cursor lies and the distance to the virtual touch panel at the point when the indicated position 322 approaches the virtual touch panel 311 and passes the hover surface 315. In addition, the positional error when the virtual touch panel 311 is touched can be reduced. This improves operability in space.
As described above, in the second embodiment, a hover surface 315 arranged apart from the virtual touch panel 311 in the direction opposite to the display surface of the display means 26 is imagined, the contact determination unit 216 determines the passage of the indicated position 322 through the hover surface 315 before contact with the virtual touch panel 311, and the operation input signal generation unit 217 generates an operation signal for displaying an indication that the indicated position 322 is approaching the virtual touch panel 311. This makes it possible for the operator to recognize the area in which the indicated position 322 lies and the distance to the virtual touch panel 311.
(Embodiment 3)
Embodiment 3 of the present invention will be described in detail with reference to FIGS. 8 and 9. FIG. 8 is a perspective view showing the schematic configuration of the display means 26, the real camera 10, the virtual touch panel 311, and the instruction effective area 330 according to the third embodiment. FIG. 9 is a flowchart of the operation input processing according to the third embodiment.
The operation input system 1 according to the third embodiment has the same configuration as the first and second embodiments, but the functions of the parameter acquisition unit 213 and the indicator detection unit 215 are partially different. In the operation input system 1 according to the third embodiment, the operation input device 20 imagines an instruction effective area 330 in which an instruction by the indicator 321 is valid, treats only detection of the indicator 321 inside the instruction effective area 330 as valid, and does not detect the indicator 321 outside the instruction effective area 330.
As shown in FIG. 8, the instruction effective area 330 is an area limited from the spatial region in front of the display means 26. The shape of the boundary of the instruction effective area 330 is arbitrary; for example, as shown in FIG. 8, it is a rectangular parallelepiped including four faces perpendicular to the virtual touch panel 311, with adjacent faces orthogonal to each other. Alternatively, it may be a cylinder or an elliptic cylinder whose side surface is perpendicular to the virtual touch panel 311, or a shape whose cross-section widens or narrows with distance from the virtual touch panel 311. In the present embodiment, the case where the boundary of the instruction effective area 330 is a rectangular parallelepiped will be described.
A hover surface 315 similar to that of the second embodiment may also be provided within the instruction effective area 330, and the passage of the indicated position 322 of the indicator 321 through the hover surface 315 may be determined before contact with the virtual touch panel 311.
The parameter acquisition unit 213 acquires parameters for specifying the instruction effective area 330 in addition to the parameters for specifying the virtual touch panel 311. The parameters for specifying the virtual touch panel 311 are, as in the first embodiment, the angle θ of the line-of-sight direction E of the real camera 10, the distance S from the reference point of the real camera 10 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311.
The parameters for specifying the instruction effective area 330 are arbitrary parameters indicating the boundary of the instruction effective area 330 in the coordinate space of the virtual camera 312, for example, the upper and lower limits of the virtual three-dimensional coordinate values indicating the boundary of the instruction effective area 330, or the coefficients of a three-dimensional function representing the boundary surface of the instruction effective area 330. The parameters acquired by the parameter acquisition unit 213 may include other parameters for specifying the virtual touch panel 311 or the instruction effective area 330.
The indicator detection unit 215 detects, as the indicated position 322 of the indicator 321, the point closest to the virtual camera 312 based on the virtual three-dimensional data within the instruction effective area 330 specified by the parameters, out of the virtual three-dimensional data of the virtual camera 312 output by the coordinate conversion unit 214.
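One plausible way to realize this filtering, continuing the earlier hypothetical sketches and assuming the rectangular-parallelepiped boundary is given as per-axis lower and upper limits in the virtual camera's coordinate space, is shown below. The indicated position would then be detected on the filtered array, as in the earlier detect_indicated_position sketch (step S202 followed by step S106).

```python
def filter_effective_area(virtual_points: np.ndarray,
                          lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    """Keep only the points inside the instruction effective area 330 (box-shaped boundary).

    lower, upper: length-3 arrays of the per-axis lower and upper coordinate limits
                  of the boundary, expressed in the virtual camera's (x, y'', z'') space.
    """
    inside = np.all((virtual_points >= lower) & (virtual_points <= upper), axis=1)
    return virtual_points[inside]
```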
The other configurations of the operation input device 20 are the same as in the first and second embodiments. The operation input processing of the operation input device 20 configured in this way will be described with reference to the flowchart shown in FIG. 9. In FIG. 9, the processes given the same reference numerals as in FIG. 5 are the same as in the first embodiment.
First, the parameter acquisition unit 213 displays a parameter input screen on the display means 26. The administrator imagines the position and orientation of the virtual touch panel 311 and inputs parameters for specifying the virtual touch panel 311 and parameters for specifying the instruction effective area 330. The parameter acquisition unit 213 acquires the parameters input in advance by the administrator (step S201).
The parameters for specifying the virtual touch panel 311 acquired by the parameter acquisition unit 213 are, as in the first embodiment, the angle θ of the line-of-sight direction E of the real camera 10, the distance S from the reference point of the real camera 10 to the specific point 313, and the distance Pz from the reference plane B to the virtual touch panel 311. The parameters for specifying the instruction effective area 330 are parameters indicating the boundary of the instruction effective area 330 in the coordinate space of the virtual camera 312. The parameters acquired by the parameter acquisition unit 213 may include other parameters for specifying the virtual touch panel 311 or the instruction effective area 330.
Next, using the parameters acquired by the parameter acquisition unit 213, the coordinate conversion unit 214 determines the method of coordinate transformation from the three-dimensional data in the coordinate space of the real camera 10 to the virtual three-dimensional data in the coordinate space of the virtual camera 312 (step S102). The data acquisition unit 211 acquires the output data of the real camera 10 (step S103). The three-dimensional data generation unit 212 develops the output data of the real camera 10 in a three-dimensional coordinate space to generate three-dimensional data (step S104: three-dimensional data generation step).
After that, the coordinate conversion unit 214 converts the three-dimensional data in the coordinate space of the real camera 10 into virtual three-dimensional data in the coordinate space of the virtual camera 312 using the coordinate transformation method determined in step S102 (step S105: coordinate conversion step). The detailed processing of steps S102 to S105 is the same as in the first embodiment.
Next, the indicator detection unit 215 excludes, from the virtual three-dimensional data coordinate-transformed in step S105, the data outside the instruction effective area 330 specified based on the parameters acquired in step S201 (step S202).
 その後、指示有効領域330外のデータを除外した仮想3次元データに基づいて、指示体検出部215が、仮想カメラ312から最も近い点を指示体321の指示位置322として検出し、指示位置322の仮想3次元座標を算出する。そして、接触判定部216は、指示位置322の仮想3次元座標に基づいて、基準面Bから指示位置322までの、最短距離Vzを算出する(ステップS106)。 After that, based on the virtual three-dimensional data excluding data outside the instruction effective area 330, the pointer detection unit 215 detects the point closest to the virtual camera 312 as the designated position 322 of the pointer 321, Calculate virtual three-dimensional coordinates. Then, the contact determination unit 216 calculates the shortest distance Vz from the reference plane B to the indicated position 322 based on the virtual three-dimensional coordinates of the indicated position 322 (step S106).
 接触判定部216は、基準面Bから仮想タッチパネル311までの距離Pzと指示位置322までの距離Vzとを比較し(ステップS107)、距離Vzが距離Pz以下であるか否かを判定する(ステップS108:接触判定ステップ)。 The contact determination unit 216 compares the distance Pz from the reference plane B to the virtual touch panel 311 with the distance Vz to the indicated position 322 (step S107), and determines whether or not the distance Vz is equal to or less than the distance Pz (step S108: contact determination step).
 If the distance Vz is equal to or less than the distance Pz (step S108: Yes), the coordinates on the virtual touch panel 311 are calculated on the basis of the virtual three-dimensional coordinates of the indicated position 322 obtained in step S106 and output to the operation input signal generation unit 217 (step S109). If the distance Vz is greater than the distance Pz (step S108: No), the processing returns to step S103.
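 Steps S107 to S109 then reduce to a comparison and a projection. In this sketch the virtual touch panel 311 is assumed to be perpendicular to the virtual camera's z axis, so the panel coordinates can be taken directly from the x and y components of the indicated position:

```python
def contact_and_panel_coords(indicated, vz, pz):
    """Return (touched, panel_xy): touched is True when Vz <= Pz, and
    panel_xy is the assumed projection of the indicated position 322
    onto the virtual touch panel 311."""
    if vz <= pz:
        return True, (float(indicated[0]), float(indicated[1]))
    return False, None
```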
 After the coordinates on the virtual touch panel 311 obtained in step S109 have been output, the processing ends if the administrator has issued a command to end the operation input processing (step S110: Yes). If there is no end command (step S110: No), the processing returns to step S103. The detailed processing of steps S106 to S110 is the same as in Embodiment 1.
 The operation input signal generation unit 217 generates an operation input signal on the basis of the change over time of the coordinates on the virtual touch panel 311 obtained in step S109 (operation input signal generation step) and outputs it. The operation input signal is a signal that moves a cursor and instructs selection, movement, or the like, or a signal that instructs execution of a pre-installed application or the like.
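 Purely as an illustration of how the time change of the step-S109 coordinates might be turned into an operation input signal (the event names and the threshold below are invented for the example and are not part of the patent):

```python
def classify_gesture(trace, tap_max_travel=5.0):
    """trace: list of (x, y) panel coordinates sampled while in contact.
    Little movement is treated as a tap (selection), larger movement as a
    drag (cursor move); the threshold is illustrative only."""
    if not trace:
        return None
    if len(trace) < 2:
        return {"event": "tap", "pos": trace[0]}
    dx = trace[-1][0] - trace[0][0]
    dy = trace[-1][1] - trace[0][1]
    travel = (dx ** 2 + dy ** 2) ** 0.5
    if travel <= tap_max_travel:
        return {"event": "tap", "pos": trace[-1]}
    return {"event": "drag", "from": trace[0], "to": trace[-1]}
```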
 In this way, the operation input device 20 detects a touch operation of the pointer 321 on the virtual touch panel 311 by comparing the position of the virtual touch panel 311 with the indicated position 322 detected from the data within the instruction effective area 330, among the data obtained by converting the three-dimensional data based on the output data of the real camera 10 into virtual three-dimensional data of the virtual camera 312.
 The effects of Embodiment 3 will now be described. In some display means 26, only a part of the display means 26 is the target of the pointing operation, and other operation units 331 such as a card slot or a barcode reader are provided. In such a case, the operation input device 20 must avoid erroneously detecting an operation on the other operation unit 331 as a pointing operation. According to Embodiment 3, the instruction effective area 330 is set and only the indicated position 322 of a pointer 321 located inside the instruction effective area 330 is detected, so that a non-contact operation input can be detected accurately without an operation on the other operation unit 331 being erroneously detected as a pointing operation.
 As described above, in Embodiment 3, the pointer detection unit 215 of the operation input device 20 detects, as the indicated position 322, the point closest to the virtual camera 312 on the basis of the data within the instruction effective area 330 of the virtual three-dimensional data of the virtual camera 312, and the contact determination unit 216 compares the distance Vz from the virtual camera 312 to the indicated position 322 with the distance Pz from the virtual camera 312 to the virtual touch panel 311 and determines that the pointer 321 has touched the virtual touch panel 311 when the distance Vz is equal to or less than the distance Pz. This makes it possible to accurately detect a non-contact operation input without erroneously detecting another operation performed by the operator outside the instruction effective area 330.
(Embodiment 4)
 Embodiment 4 of the present invention will now be described in detail with reference to FIG. 10. FIG. 10 is a diagram showing the arrangement of the real cameras in Embodiment 4.
 The operation input system 1 according to Embodiment 4 executes the same operation input processing as in Embodiment 1, but differs in that two or more real cameras including the real cameras 12 and 13 are used. Any number of real cameras equal to or greater than two may be used; the following description covers a configuration with the real cameras 12 and 13.
 An operator may perform an operation input using two or more pointers. For example, by touching the touch panel with two fingers at the same time, various operation inputs become possible depending on the distance between the two fingers or on the moving direction and moving distance of the two fingers. The operation input device 20 according to Embodiment 4 can reliably detect an operation input on the virtual touch panel 311 even when there are two or more pointers 321.
 The first real camera 12 and the second real camera 13 are fixed at locations separated from each other. For example, as shown in FIG. 10, the first real camera 12 is installed at the upper right end of the display means 26 and the second real camera 13 is installed at the upper left end of the display means 26. That is, the first real camera 12 is positioned at the upper right of the virtual touch panel 311 and the second real camera 13 is positioned at the upper left of the virtual touch panel 311. The line-of-sight directions of the real cameras 12 and 13 are directions inclined with respect to the pointing direction of the pointer 321 and are different from each other. The configuration of the operation input device 20 is the same as in Embodiment 1.
 The administrator of the operation input device 20 selects a master camera from among the two or more real cameras 12 and 13 and sets the parameters for the master camera. Here, the case where the real camera 12 is the master camera is described. The parameter acquisition unit 213 acquires the angle θx about a straight line passing through the specific point 313 and parallel to the x axis and the angle θy about a straight line passing through the specific point 313 and parallel to the y axis, each being an angle of the line-of-sight direction E of the real camera 12 with respect to the direction V perpendicular to the extension plane P of the virtual touch panel 311, as well as the distance S from the real camera 12 to the specific point 313 and the distance Pz from the reference plane B to the virtual touch panel 311.
 The three-dimensional data generation unit 212 generates three-dimensional data on the basis of the output data of the two or more real cameras 12 and 13 and the position information of the two or more real cameras 12 and 13. Note that the three-dimensional data generation unit 212 performs, in advance, a calibration on the two or more real cameras 12 and 13 for generating a single set of three-dimensional data.
 Specifically, the three-dimensional data generation unit 212 expands the output data of the real camera 12, which is the master camera, into a three-dimensional coordinate space and generates three-dimensional data. Similarly, the three-dimensional data generation unit 212 generates three-dimensional data on the basis of the output data of the real camera 13, which is a slave camera; this three-dimensional data has been corrected by a calibration performed in advance on the basis of the positional relationship between the real camera 12 and the real camera 13 and their output data. The three-dimensional data generation unit 212 supplements the three-dimensional data based on the output data of the real camera 12 with the calibrated three-dimensional data based on the output data of the real camera 13 to generate a single set of three-dimensional data.
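 A sketch of how the two point clouds could be combined, assuming the prior calibration has produced a 4x4 homogeneous matrix that maps slave-camera (real camera 13) coordinates into the master-camera (real camera 12) coordinate space; the matrix itself would come from an offline calibration step not shown here:

```python
import numpy as np

def merge_point_clouds(points_master, points_slave, slave_to_master):
    """Map the slave camera's (N, 3) points into the master camera's
    coordinate space and stack them with the master camera's points."""
    homog = np.hstack([points_slave, np.ones((len(points_slave), 1))])
    slave_in_master = (slave_to_master @ homog.T).T[:, :3]
    return np.vstack([points_master, slave_in_master])
```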
 The coordinate conversion unit 214 assumes a virtual camera 312 whose line-of-sight direction V is the direction obtained by rotating the line-of-sight direction E of the real camera 12 about the specific point 313 by the angle θx about the straight line parallel to the x axis and by the angle θy about the straight line parallel to the y axis, and whose reference point lies on the reference plane B. The coordinate conversion unit 214 then determines a coordinate conversion method for converting the three-dimensional data generated by the three-dimensional data generation unit 212 from the output data of the real cameras 12 and 13 into virtual three-dimensional data of the virtual camera 312.
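 The two-angle rotation could be composed as below; the placement of the specific point 313 on the master camera's z axis and the order in which the x- and y-axis rotations are applied are assumptions, since the text fixes neither:

```python
import numpy as np

def make_conversion_xy(theta_x_deg, theta_y_deg, s):
    """Rotation about the specific point 313 (assumed at distance s on the
    z axis) by theta_x about an x-parallel axis and theta_y about a
    y-parallel axis, returned as a 4x4 homogeneous transform."""
    tx, ty = np.radians(theta_x_deg), np.radians(theta_y_deg)
    rx = np.array([[1, 0, 0, 0],
                   [0, np.cos(tx), -np.sin(tx), 0],
                   [0, np.sin(tx),  np.cos(tx), 0],
                   [0, 0, 0, 1]])
    ry = np.array([[ np.cos(ty), 0, np.sin(ty), 0],
                   [ 0, 1, 0, 0],
                   [-np.sin(ty), 0, np.cos(ty), 0],
                   [ 0, 0, 0, 1]])
    to_origin = np.eye(4); to_origin[2, 3] = -s
    back      = np.eye(4); back[2, 3]      =  s
    return back @ ry @ rx @ to_origin  # rotation order is an assumption
```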
 The coordinate conversion unit 214 converts the three-dimensional data generated by the three-dimensional data generation unit 212 into virtual three-dimensional data using the determined coordinate conversion method.
 The contact determination unit 216 assumes, as the virtual touch panel 311, a plane that is separated from the virtual camera 312 by the distance Pz in the direction V and that is perpendicular to the direction V. Then, on the basis of the virtual three-dimensional data coordinate-converted from the three-dimensional data of the real cameras 12 and 13, the distances Vz in the direction V from the virtual camera 312 to the pointers 323 and 324 are calculated.
 Here, in Embodiment 4, in order to be able to detect operations by two or more pointers 323 and 324, the contact determination unit 216 detects, as one or more distances Vz (third distances), one or more local minima of the distance in the direction V from the virtual camera 312 to the pointers 323 and 324 calculated on the basis of the virtual three-dimensional data. That is, the contact determination unit 216 plots, on the basis of the virtual three-dimensional data, the distances from the virtual camera 312 to the points existing in the three-dimensional space and detects the points showing local minima as the indicated positions of the pointers 323 and 324.
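 One possible way to extract such local minima (the indicated positions of the pointers 323 and 324) from the virtual three-dimensional data is a coarse grid search over the x-y plane, sketched below; the grid size and the search strategy are illustrative only, and any local-minimum detector over the depth values would serve the same purpose:

```python
def detect_local_minima(points_virtual, grid=20.0):
    """points_virtual: iterable of (x, y, z) in virtual-camera coordinates,
    with z the distance along the viewing direction V. Returns the points
    whose z is minimal within their grid cell and not larger than that of
    any neighbouring cell."""
    cells = {}
    for p in points_virtual:
        key = (int(p[0] // grid), int(p[1] // grid))
        if key not in cells or p[2] < cells[key][2]:
            cells[key] = p
    minima = []
    for (cx, cy), p in cells.items():
        neighbours = [cells.get((cx + dx, cy + dy))
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        if all(n is None or p[2] <= n[2] for n in neighbours):
            minima.append(p)
    return minima
```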
 The contact determination unit 216 compares the one or more distances Vz with the distance Pz from the virtual camera 312 to the virtual touch panel 311 and determines that a pointer 323 or 324 has touched the virtual touch panel 311 when the corresponding distance Vz is equal to or less than the distance Pz.
 Here, the effect of using two or more real cameras 12 and 13 will be described. In the example of FIG. 10, the pointer 324 is hidden behind the pointer 323 and cannot be detected from three-dimensional data based only on the output data of the real camera 12. In Embodiment 4, the three-dimensional data is supplemented by the output data of the real camera 13, whose installation position and line-of-sight direction are different, so that the virtual three-dimensional coordinates of the indicated position of the pointer 324 can be output from the three-dimensional data based on the output data of the real cameras 12 and 13.
 In other words, since the virtual three-dimensional coordinates of the indicated positions are output on the basis of the output data of the two or more real cameras 12 and 13, the problem of failing to detect an indicated position can be avoided.
 The operation input signal generation unit 217 generates an operation input signal on the basis of the change over time of the coordinates on the virtual touch panel of the indicated positions for which the contact determination unit 216 has determined contact. At this time, when contact at two or more different indicated positions has been determined, operation input signals corresponding to the preset two or more contacts are generated.
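 As a rough illustration of a signal corresponding to two simultaneous contacts (for example, a pinch whose meaning depends on how the separation changes over time); the returned structure is invented for the example and is not part of the patent:

```python
def classify_multitouch(positions):
    """positions: list of (x, y) panel coordinates judged to be in contact.
    For two contacts, report the separation that a caller would track over
    time to distinguish, e.g., pinch-in from pinch-out."""
    if len(positions) == 2:
        (x1, y1), (x2, y2) = positions
        sep = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        return {"contacts": 2, "separation": sep}
    return {"contacts": len(positions)}
```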
 As described above, in Embodiment 4, the operation input system 1 includes two or more real cameras 12 and 13 that are fixed apart from each other and whose line-of-sight directions are inclined with respect to the pointing directions of the pointers 323 and 324 and are different from each other. The three-dimensional data generation unit 212 generates three-dimensional data on the basis of the output data of the two or more real cameras 12 and 13, and the coordinate conversion unit 214 converts the data into virtual three-dimensional data using a coordinate conversion method based on the parameters of the real camera 12, which is the master camera. The contact determination unit 216 then acquires the distances Vz, which are one or more local minima of the distance in the direction V from the virtual camera 312 to the pointers 323 and 324 calculated on the basis of the virtual three-dimensional data, and detects the indicated positions of the one or more pointers 323 and 324. As a result, a pointer 324 that is not detected by one real camera 12 can be detected by the other real camera 13, and reliable operation input by a plurality of touches becomes possible.
 In Embodiments 1 to 4 above, the real cameras 10 to 13 are installed at the edge of the display means 26 and the administrator sets the parameters for the coordinate conversion; however, the real cameras 10 to 13 may be built into the operation input device 20. In this case, the real cameras 10 to 13 may be mounted, at the time of assembling the operation input device 20, with their line-of-sight directions inclined with respect to the direction perpendicular to the display surface of the display means 26, and a coordinate conversion method determined in accordance with that inclination angle may be used.
 In addition to the configurations of Embodiments 1 to 4 above, a transparent plate 316 may further be installed in front of the display means 26 and the real camera 10, as shown in FIG. 11. The transparent plate 316 is an arbitrary transparent plate that transmits the light output by the display means 26, for example a glass plate or an acrylic plate. It protects the display means 26 and the real camera 10 and prompts the operator to perform operations in a space a certain distance away.
 In Embodiments 1 to 4 above, an operation input is detected by assuming one rectangular, flat virtual touch panel 311 corresponding to the rectangular display surface of one display means 26; however, the shape and size of the virtual touch panel 311 are arbitrary, and it does not have to be flat. Two or more virtual touch panels 311 may also be assumed for the display surface of one display means 26; in that case, the three-dimensional data of the real cameras 10 to 13 are coordinate-converted with two or more virtual cameras 312, and an operation input on each virtual touch panel 311 is detected. In these cases, the parameter acquisition unit 213 also acquires parameters representing the shape and size of the virtual touch panels 311, and the coordinate conversion unit 214 performs a coordinate conversion including enlargement, reduction, or deformation of the coordinate values of the three-dimensional data in accordance with the parameters representing the shape and size of the virtual touch panels 311.
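 Where the virtual touch panel's size differs from a default, the additional scaling mentioned above could be folded into the conversion as in this sketch; the notion of a reference size and the uniform scaling of x and y are assumptions made for the example:

```python
def scale_panel_coords(points_virtual, panel_size, reference_size):
    """Scale the x and y coordinate values so that a panel of panel_size
    (width, height) maps onto an assumed reference panel size."""
    sx = reference_size[0] / panel_size[0]
    sy = reference_size[1] / panel_size[1]
    scaled = points_virtual.copy()
    scaled[:, 0] *= sx
    scaled[:, 1] *= sy
    return scaled
```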
 In Embodiment 4 above, one or more indicated positions are detected using two or more real cameras 12 and 13; however, also in Embodiments 1 to 3 using one real camera 10, the distances Vz, which are one or more local minima of the distance from the virtual camera 312 to the pointers 321, may be acquired and the indicated positions of one or more pointers 321 may be detected.
 The hardware configurations and flowcharts shown in the above embodiments are examples and can be changed and applied arbitrarily. Each function realized by the CPU 21 can be realized using an ordinary computer system without relying on a dedicated system.
 For example, a computer that can realize each function may be configured by storing a program for executing the operations of the above embodiments in a computer-readable recording medium such as a CD-ROM (Compact Disc Read-Only Memory), DVD (Digital Versatile Disc), MO (Magneto Optical Disc), or memory card, distributing it, and installing the program in the computer. When each function is realized by sharing between an OS (Operating System) and an application, or by cooperation between the OS and an application, only the portion other than the OS may be stored in the recording medium.
 Various embodiments and modifications of the present invention are possible without departing from the broad spirit and scope of the present invention. The embodiments described above are for explaining the present invention and do not limit its scope. That is, the scope of the present invention is indicated by the claims rather than by the embodiments, and various modifications made within the scope of the claims and within the meaning of the invention equivalent thereto are considered to be within the scope of the present invention.
 This application is based on Japanese Patent Application No. 2021-038462 filed on March 10, 2021. The entire specification, claims, and drawings of Japanese Patent Application No. 2021-038462 are incorporated herein by reference.
DESCRIPTION OF SYMBOLS
 1 … operation input system
 10, 11, 12, 13 … real camera
 20 … operation input device
 21 … CPU
 22 … RAM
 23 … ROM
 24 … communication interface
 25 … storage unit
 26 … display means
 211 … data acquisition unit
 212 … three-dimensional data generation unit
 213 … parameter acquisition unit
 214 … coordinate conversion unit
 215 … pointer detection unit
 216 … contact determination unit
 217 … operation input signal generation unit
 311 … virtual touch panel
 312 … virtual camera
 313 … specific point
 315 … hover plane
 316 … transparent plate
 321, 323, 324 … pointer
 322 … indicated position
 330 … instruction effective area
 331 … other operation unit

Claims (13)

  1. An operation input device comprising:
     a three-dimensional data generation unit that generates three-dimensional data on the basis of output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to a pointing direction of a pointer;
     a coordinate conversion unit that, assuming a virtual camera whose line-of-sight direction is the direction of a second straight line obtained by rotating a first straight line extending in the line-of-sight direction of the real camera by a predetermined angle about a specific point that is on the first straight line and at a predetermined first distance from the real camera, converts the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
     a contact determination unit that, where a plane separated from the virtual camera by a predetermined second distance in a direction parallel to the second straight line is a virtual touch panel, compares a third distance, which is the shortest distance from the virtual camera to the pointer in the direction parallel to the second straight line calculated on the basis of the virtual three-dimensional data, with the second distance from the virtual camera to the virtual touch panel, and determines that the pointer has touched the virtual touch panel when the third distance is equal to or less than the second distance; and
     an operation input signal generation unit that generates an operation input signal based on the virtual three-dimensional data when the contact determination unit determines that the pointer has made contact.
  2. The operation input device according to claim 1, further comprising a parameter acquisition unit that acquires parameters including an angle of rotation from the first straight line to the second straight line, the first distance from the real camera to the specific point, and the second distance from the virtual camera to the virtual touch panel,
     wherein the coordinate conversion unit uses the parameters to perform a coordinate conversion expressed as a rotation about the specific point from the real camera to the virtual camera.
  3. The operation input device according to claim 1 or 2, wherein, where a plane passing through a reference point of the real camera and parallel to the extending direction of the virtual touch panel is a reference plane, a reference point of the virtual camera is on the reference plane,
     the first distance is the distance from the reference point of the real camera to the specific point, the second distance is the distance from the reference plane to the virtual touch panel, and the third distance is the shortest distance from the reference plane to the pointer, and
     the coordinate conversion unit performs a coordinate conversion expressed as a rotation about the specific point from the real camera to the virtual camera and a translation along the second straight line from the position after the rotation to the reference point.
  4. The operation input device according to claim 2 or 3, wherein the rotation about the specific point from the real camera to the virtual camera is a rotation about a straight line that passes through the specific point and is parallel to the extending direction of the virtual touch panel.
  5. The operation input device according to any one of claims 1 to 4, wherein the virtual touch panel has a shape extending in a direction perpendicular to the second straight line, and the second straight line passes through the center of the virtual touch panel.
  6. The operation input device according to any one of claims 1 to 5, wherein the contact determination unit assumes a hover plane parallel to the virtual touch panel and spaced from the virtual touch panel in the direction opposite to the display means and, where the distance from the hover plane to the virtual camera in the direction parallel to the second straight line is a fourth distance, determines that the pointer is at a position closer to the virtual touch panel than the hover plane when the third distance becomes equal to or less than the fourth distance, and
     the operation input signal generation unit generates an operation input signal that causes the display means to display an indication that the pointer is approaching the virtual touch panel.
  7. The operation input device according to any one of claims 1 to 6, wherein the contact determination unit determines contact on the basis of the third distance, which is the shortest distance from the virtual camera to the pointer calculated from data within a predetermined instruction effective area among the virtual three-dimensional data output by the coordinate conversion unit.
  8. The operation input device according to any one of claims 1 to 7, wherein the three-dimensional data generation unit generates the three-dimensional data on the basis of output data of two or more of the real cameras, which are fixed apart from each other and whose line-of-sight directions are inclined with respect to the pointing direction of the pointer and are different from each other, and position information of the two or more real cameras.
  9. The operation input device according to claim 8, wherein the coordinate conversion unit performs, in advance, a calibration on the two or more real cameras for generating the single set of three-dimensional data.
  10. The operation input device according to any one of claims 1 to 9, wherein the contact determination unit compares one or more of the third distances, which are one or more local minima of the distance from the virtual camera to the pointer in the direction parallel to the second straight line calculated on the basis of the virtual three-dimensional data, with the second distance from the virtual camera to the virtual touch panel in the direction parallel to the second straight line, and determines that one or more of the pointers have touched the virtual touch panel when the one or more third distances are equal to or less than the second distance, and
     the operation input signal generation unit generates the operation input signal in accordance with the number of pointers for which the contact determination unit has determined contact.
  11. An operation input method comprising:
     a three-dimensional data generation step of generating three-dimensional data on the basis of output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to a pointing direction of a pointer;
     a coordinate conversion step of, assuming a virtual camera whose line-of-sight direction is the direction of a second straight line obtained by rotating a first straight line extending in the line-of-sight direction of the real camera by a predetermined angle about a specific point that is on the first straight line and at a predetermined first distance from the real camera, converting the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
     a contact determination step of, where a plane separated from the virtual camera by a predetermined second distance in a direction parallel to the second straight line is a virtual touch panel, comparing a third distance, which is the shortest distance from the virtual camera to the pointer in the direction parallel to the second straight line calculated on the basis of the virtual three-dimensional data, with the second distance from the virtual camera to the virtual touch panel, and determining that the pointer has touched the virtual touch panel when the third distance is equal to or less than the second distance; and
     an operation input signal generation step of generating an operation input signal based on the virtual three-dimensional data when it is determined in the contact determination step that the pointer has made contact.
  12. A program for causing a computer to function as:
     a three-dimensional data generation unit that generates three-dimensional data on the basis of output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to a pointing direction of a pointer;
     a coordinate conversion unit that, assuming a virtual camera whose line-of-sight direction is the direction of a second straight line obtained by rotating a first straight line extending in the line-of-sight direction of the real camera by a predetermined angle about a specific point that is on the first straight line and at a predetermined first distance from the real camera, converts the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
     a contact determination unit that, where a plane separated from the virtual camera by a predetermined second distance in a direction parallel to the second straight line is a virtual touch panel, compares a third distance, which is the shortest distance from the virtual camera to the pointer in the direction parallel to the second straight line calculated on the basis of the virtual three-dimensional data, with the second distance from the virtual camera to the virtual touch panel, and determines that the pointer has touched the virtual touch panel when the third distance is equal to or less than the second distance; and
     an operation input signal generation unit that generates an operation input signal based on the virtual three-dimensional data when the contact determination unit determines that the pointer has made contact.
  13. An operation input device comprising:
     a three-dimensional data generation unit that generates three-dimensional data on the basis of output data of a real camera that is fixed with respect to a display surface of a display means and whose line-of-sight direction is inclined with respect to a pointing direction of a pointer;
     a coordinate conversion unit that, assuming a virtual camera whose line-of-sight direction is the direction of a second straight line obtained by rotating a first straight line extending in the line-of-sight direction of the real camera by a predetermined angle about a specific point on the first straight line, converts the three-dimensional data in the coordinate space of the real camera into virtual three-dimensional data in the coordinate space of the virtual camera;
     a contact determination unit that, assuming a virtual touch panel on a plane separated from the virtual camera in a direction parallel to the second straight line, determines that the pointer has touched the virtual touch panel when the shortest distance from the virtual camera to the pointer in the direction parallel to the second straight line, calculated on the basis of the virtual three-dimensional data, is equal to or less than the distance from the virtual camera to the virtual touch panel; and
     an operation input signal generation unit that generates an operation input signal based on the virtual three-dimensional data when the contact determination unit determines that the pointer has made contact.
PCT/JP2022/010548 2021-03-10 2022-03-10 Operation input device, operation input method, and program WO2022191276A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023505628A JP7452917B2 (en) 2021-03-10 2022-03-10 Operation input device, operation input method and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-038462 2021-03-10
JP2021038462 2021-03-10

Publications (1)

Publication Number Publication Date
WO2022191276A1 true WO2022191276A1 (en) 2022-09-15

Family

ID=83228067

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010548 WO2022191276A1 (en) 2021-03-10 2022-03-10 Operation input device, operation input method, and program

Country Status (2)

Country Link
JP (1) JP7452917B2 (en)
WO (1) WO2022191276A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011175623A (en) * 2010-01-29 2011-09-08 Shimane Prefecture Image recognition apparatus, operation determination method, and program
JP2012137989A (en) * 2010-12-27 2012-07-19 Sony Computer Entertainment Inc Gesture operation input processor and gesture operation input processing method
WO2014034031A1 (en) * 2012-08-30 2014-03-06 パナソニック株式会社 Information input device and information display method
JP2016134022A (en) * 2015-01-20 2016-07-25 エヌ・ティ・ティ アイティ株式会社 Virtual touch panel pointing system
JP2019219820A (en) * 2018-06-18 2019-12-26 チームラボ株式会社 Video display system, video display method and computer program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4608326B2 (en) 2005-01-26 2011-01-12 株式会社竹中工務店 Instruction motion recognition device and instruction motion recognition program

Also Published As

Publication number Publication date
JP7452917B2 (en) 2024-03-19
JPWO2022191276A1 (en) 2022-09-15

Similar Documents

Publication Publication Date Title
US9001208B2 (en) Imaging sensor based multi-dimensional remote controller with multiple input mode
JP5921835B2 (en) Input device
KR100851977B1 (en) Controlling Method and apparatus for User Interface of electronic machine using Virtual plane.
JP6371475B2 (en) Eye-gaze input device, eye-gaze input method, and eye-gaze input program
JP5808712B2 (en) Video display device
US10318152B2 (en) Modifying key size on a touch screen based on fingertip location
US8929628B2 (en) Measuring device and measuring method
EP3032375B1 (en) Input operation system
JP2014197380A (en) Image projector, system, image projection method and program
JP6176013B2 (en) Coordinate input device and image processing device
US20110193969A1 (en) Object-detecting system and method by use of non-coincident fields of light
JP2015212927A (en) Input operation detection device, image display device including input operation detection device, and projector system
US20160191875A1 (en) Image projection apparatus, and system employing interactive input-output capability
JP2016103137A (en) User interface system, image processor and control program
JP2014115876A (en) Remote operation method of terminal to be operated using three-dimentional touch panel
JP2014115876A5 (en)
US20120056808A1 (en) Event triggering method, system, and computer program product
WO2022191276A1 (en) Operation input device, operation input method, and program
EP3032380B1 (en) Image projection apparatus, and system employing interactive input-output capability
JP2018018308A (en) Information processing device and control method and computer program therefor
JP6898021B2 (en) Operation input device, operation input method, and program
JP6555958B2 (en) Information processing apparatus, control method therefor, program, and storage medium
KR101573287B1 (en) Apparatus and method for pointing in displaying touch position electronic device
EP3059664A1 (en) A method for controlling a device by gestures and a system for controlling a device by gestures
JP2016024518A (en) Coordinate detection system, coordinate detection method, information processing device and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22767230

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023505628

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22767230

Country of ref document: EP

Kind code of ref document: A1