WO2011018901A1 - Image recognition apparatus, operation determination method, and program
- Publication number
- WO2011018901A1 (PCT/JP2010/005058)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- operator
- image
- operation surface
- virtual
- movement
- Prior art date
Classifications
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/0317—Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/14—Digital output to display device; Cooperation and interconnection of the display device with other functional units
- G06T7/20—Analysis of motion
Definitions
- The present invention relates to an image recognition apparatus and an operation determination method, and more particularly to an image recognition apparatus and an operation determination method for determining the operation of a target from an image captured by a video camera or the like.
- Patent Document 1 proposes a technique that includes a host computer that recognizes the shape and movement of an object in an image captured by a CCD camera and a display that shows the shape and movement recognized by the host computer. When the user gives an instruction by hand gesture or the like, the given gesture appears on the display screen, a virtual switch or the like displayed on the screen can be selected with an arrow cursor icon by gesture, and very simple device operation becomes possible without requiring an input device such as a mouse.
- In such techniques, operation input is performed by capturing the movement and shape of a hand or finger in an image and recognizing it as a kind of gesture. For example, an operator facing a large screen gestures toward a camera installed at the bottom of the screen, and the contents are reflected on the large screen. The operator's shape and movement are extracted from the captured image by a method known in the art and compared with, for example, predetermined patterns stored in a database; the meaning of the shape or movement is thereby determined and used to control the device.
- As a technique for reading the operator's image, the operator can be photographed with a three-dimensional or stereoscopic camera to reproduce a stereoscopic image, and this is used in such applications. By reproducing a stereoscopic image, the operator's movement can be grasped three-dimensionally; for example, because the back-and-forth movement of the operator's hand can also be recognized, the variety of usable gestures increases compared with a two-dimensional image. Also, even when multiple people are extracted as images, the three-dimensional image reveals their front-to-back relationship, so only the movement of the foremost operator can be extracted and used for operation input.
- However, a conventional gesture detection device has difficulty capturing the clear intention of the operator. An object of the present invention is to provide an image recognition device and an operation determination method that enable accurate determination of an operation.
- The invention described in claim 1 is an image recognition apparatus comprising: three-dimensional imaging means that reads an operator's image and generates stereoscopic image data; operation surface forming means that forms a virtual operation surface based on the image of the operator read by the three-dimensional imaging means; operation determination means that reads, with the three-dimensional imaging means, the movement of at least a part of the operator's image relative to the formed virtual operation surface and determines whether the movement is an operation based on the positional relationship between the part of the operator and the virtual operation surface; and signal output means that outputs a predetermined signal when the movement is determined to be an operation.
- The operation determination means determines that an operation is being performed when a part of the operator is located closer to the three-dimensional imaging means than the virtual operation surface.
- The operation determination means determines which operation is being performed from the shape or movement of the part of the operator that is closer to the three-dimensional imaging means than the virtual operation surface.
- The operation determination means searches storage means that stores, in advance, operation contents associated with shapes or movements of a part of the operator, and determines the operation corresponding to the matching shape or movement as the input operation.
- The image recognition device further includes image display means arranged so as to face the operator, and the operation determination means displays the current operation determination result on the image display means so that the operator can recognize it.
- The image recognition device further includes image display means arranged so as to face the operator, and the operation content is determined according to the virtual operation hierarchy area in which the operator is located.
- The image display means calculates the distance from the positional relationship between the virtual operation surface formed by the operation surface forming means and the part of the operator on the side of that surface opposite the three-dimensional imaging means, and displays an indication that changes according to the distance, thereby allowing the operator to visually recognize the operation to be determined.
- The image display means stops changing the indication and shows the determined operation when the part of the operator is on the three-dimensional imaging means side of the virtual operation surface.
- operation content determination means is provided for determining the content of the operation based on the operation type pre-assigned to the virtual operation hierarchy and the movement of the operator in the virtual operation hierarchy.
- The operation surface forming means forms the virtual operation surface at a position corresponding to position information of the operator's upper body.
- The operation surface forming means adjusts the position and angle of the virtual operation surface based on the position of the image display means.
- Also provided is an operation determination method in which an image recognition device recognizes an operator's image and determines the operation content, the method comprising: a three-dimensional imaging step of reading the operator's image and generating stereoscopic image data; an operation surface forming step of forming a virtual operation surface based on the operator's image read in the three-dimensional imaging step; an operation determination step of reading, by the three-dimensional imaging means, the movement of at least a part of the operator's image relative to the formed virtual operation surface and determining whether the movement is an operation based on the positional relationship between the part of the operator and the virtual operation surface; and a signal output step of outputting a predetermined signal when the movement is determined to be an operation.
- Also provided is a program that causes an image recognition apparatus to execute an operation determination method for recognizing an operator's image and determining the operation content, the method comprising: a three-dimensional imaging step of reading the operator's image and generating stereoscopic image data; an operation surface forming step of forming a virtual operation surface based on the operator's image read in the three-dimensional imaging step; an operation determination step of reading the movement of at least a part of the operator's image relative to the formed virtual operation surface and determining whether the movement is an operation based on the positional relationship between the part of the operator and the virtual operation surface; and a signal output step of outputting a predetermined signal when the movement is determined to be an operation.
- As described above, the present invention includes three-dimensional imaging means that reads an operator's image to generate stereoscopic image data and operation surface forming means that forms a virtual operation surface based on the operator's image read by the three-dimensional imaging means; the movement of at least a part of the operator's image relative to the formed virtual operation surface is read by the three-dimensional imaging means, and whether the movement is an operation is determined based on the positional relationship between the part of the operator and the virtual operation surface, so that an operation can be determined accurately.
- FIG. 1 is a diagram illustrating an example of an operation input system according to the present embodiment.
- FIG. 2 is a block diagram schematically showing a relationship between the operation input system of this embodiment and a computer.
- FIG. 3 is a block diagram illustrating an example of functional modules of a program processed in the CPU of the computer according to the present embodiment.
- FIG. 4 is a flowchart of processing according to this embodiment.
- FIG. 5 is a diagram illustrating a virtual operation surface formed based on the operation surface formation reference according to the embodiment of the present invention.
- FIG. 6 is a diagram illustrating a virtual operation surface formed based on the operation surface formation reference according to the embodiment of the present invention.
- FIG. 7 is a diagram illustrating an example of an image when a plurality of operator images are captured using a conventional 3D camera.
- FIG. 8 is a diagram showing an example of operation area setting for operation input support according to an embodiment of the present invention.
- FIG. 9 is a diagram illustrating an example of adjustment of the operation region depending on the screen or the position of the camera according to the embodiment of the present invention.
- FIG. 10 is a diagram illustrating another example of the adjustment of the operation area according to the screen or the position of the camera according to the embodiment of the present invention.
- FIG. 11 is a diagram showing another example of the adjustment of the operation area depending on the screen or the position of the camera according to the embodiment of the present invention.
- FIG. 12 is a diagram for explaining a method of adjusting the operation area according to the screen or the position of the camera according to the embodiment of the present invention.
- FIG. 13 is a diagram illustrating an operator image capturing method using a conventional 3D camera.
- FIG. 14 is a diagram illustrating an example of an operation input system using a virtual operation surface based on a marker according to an embodiment of the present invention.
- FIG. 15 is a diagram illustrating an example of a specific operation of the operation input method according to another embodiment of the present invention.
- FIG. 16 is a diagram illustrating an example of adjustment of the operation area depending on the screen or the position of the camera according to the embodiment of the present invention.
- FIG. 17 is a diagram showing an example of a specific display of operation input support according to an embodiment of the present invention.
- FIG. 18 is a diagram illustrating a virtual operation surface and an operation region according to the embodiment of the present invention.
- FIG. 19 is a diagram showing a relationship between an operator's movement and an icon displayed on the screen according to the embodiment of the present invention.
- FIG. 20 is a diagram showing an example of a specific display of the operation input screen according to the embodiment of the present invention.
- FIG. 21 is a diagram showing examples of various icons that can be used on the operation input screen according to the embodiment of the present invention.
- FIG. 22 is a diagram showing a relationship between an operator's movement and an icon displayed on the screen according to the embodiment of the present invention.
- FIG. 23 is a diagram illustrating a state in which the color of the menu button on the operation input screen according to the embodiment of the present invention changes.
- FIG. 24 is a diagram illustrating a state in which the shading of the menu button on the operation input screen according to the embodiment of the present invention changes.
- FIG. 25 is a diagram illustrating an example display screen for inputting an instruction to move a graphic displayed on the screen according to the present embodiment.
- FIG. 26 is a diagram showing a relationship between an operator's movement and a menu displayed on the screen according to the embodiment of the present invention.
- FIG. 27 is a diagram showing a relationship between an operator's movement and a menu displayed on the screen according to another embodiment of the present invention.
- FIG. 28 is a diagram showing a relationship between an operator's movement and a menu displayed on the screen according to still another embodiment of the present invention.
- FIG. 29 is a diagram showing a virtual operation surface and an operation surface formation reference according to an embodiment of the present invention.
- FIG. 30 is a diagram showing an example of adjustment of the operation area depending on the screen or the position of the camera by the projector according to the embodiment of the present invention.
- FIG. 31 is a diagram showing a relationship between an operator's movement and a menu displayed on the screen according to the embodiment of the present invention.
- FIG. 1 is a diagram illustrating an example of an operation input system according to the present embodiment.
- As shown in FIG. 1, the monitor 111 of the present embodiment is arranged in front of the operator 102, and the operator 102 can operate the operation input system while being aware that the shape or movement of a finger or the like becomes the target of operation determination at a virtual operation surface assumed to lie at a certain position between the operator 102 and the monitor 111.
- the monitor 111 displays various images for various applications intended by the system.
- In addition, the monitor 111 supports operation input as described later; for example, it displays the relevant portion of the operator 102 on the screen so that the operator 102 can recognize which movement would be determined as an operation at the present time.
- The movement of the operator 102 is photographed by the video camera 201, and the captured image is processed by the computer 110. The position and size of the virtual operation surface, and of the operation area including it, are set optimally according to the position, height, and arm length of the operator 102, or according to body dimension information such as height and shoulder width, and it is determined what operation the gesture of the portion protruding beyond the virtual operation surface toward the monitor 111 means. That is, the computer 110 creates a stereoscopic image of the operator 102 from the data obtained from the video camera 201, calculates the position of the virtual operation surface, adjusts the position and size of the virtual operation surface based on the positions and arrangement of the video camera 201 and the monitor 111 described later, determines whether the fingers of the operator 102 protrude toward the video camera 201 with respect to the virtual operation surface, and determines the operation content with that portion as the operation target.
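The protrusion test described here reduces to comparing positions along the surface normal. Below is a minimal sketch under the assumption that the 3D camera yields the fingertip position in a common coordinate frame and that the virtual operation surface has already been fitted as a plane; names such as `Plane` and `is_operation_started` are illustrative, not taken from the patent.

```python
import numpy as np

class Plane:
    """Virtual operation surface as a point and a unit normal.
    The normal points from the operator toward the camera/monitor, so a
    positive signed distance means "in front of the surface"."""
    def __init__(self, point, normal):
        self.point = np.asarray(point, dtype=float)
        n = np.asarray(normal, dtype=float)
        self.normal = n / np.linalg.norm(n)

    def signed_distance(self, p):
        return float(np.dot(np.asarray(p, dtype=float) - self.point, self.normal))

def is_operation_started(fingertip_xyz, surface: Plane) -> bool:
    """True when the fingertip has crossed the virtual operation surface
    toward the camera, i.e. an operation is considered started."""
    return surface.signed_distance(fingertip_xyz) > 0.0

# Example: surface 60 cm in front of the operator, normal along +z (toward camera).
surface = Plane(point=[0.0, 1.2, 0.6], normal=[0.0, 0.0, 1.0])
print(is_operation_started([0.1, 1.3, 0.75], surface))  # True: finger protrudes
```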
- the video camera 201 is attached to the upper part of the monitor 111 in order to acquire an image.
- However, the present invention is not limited to this arrangement as long as the necessary images can be obtained; any imaging means known in the art can be used, and the installation location can be anywhere near the monitor.
- By using a three-dimensional (or 3D) camera as the video camera 201, a stereoscopic image including the operator can be created.
- a voice output device such as a speaker (not shown) is attached to the system of the present embodiment, and information regarding display contents and operations can be transmitted to the operator by voice.
- The virtual operation surface 701 of the present embodiment is set based on the height and arm length of the operator 102, or on body dimension information such as height and shoulder width, so that the operator 102 can stretch out an arm naturally and project the hand 601 forward of the operation surface 701 to show a gesture. For example, an action can be decided by pushing (determining) it forward past the virtual operation surface, or the push-out can be used as the criterion for confirming an operation after it has been decided; this is easy for the user to recognize, and the operability is close to that of a conventional touch-panel operation.
- At the same time, the variations of operation are far greater than with a conventional touch panel (two-handed operation, body gestures, multiple fingers, and so on).
- The virtual operation surface 701 shown in FIGS. 5 and 6 is formed in real time once the camera 201 captures the image of the operator 102; however, because the operator's standing position is not constant until the operator starts the operation, the virtual operation surface cannot be fixed and operation determination is difficult. Therefore, in the present embodiment, the setting process of the virtual operation surface is started when the operator's body remains stationary for a certain time within the imaging range of the three-dimensional camera.
- The virtual operation surface of the present embodiment can thus be formed in real time, but even in this case, operation determination can be made more accurate by limiting the operator's standing position to a certain range that is optimal for the system.
- For example, a footprint indicating the standing position can be drawn on the floor, the presence of a certain limited range can be made recognizable to the operator by the arrangement of the monitor or the system, or a screen (partition) can be placed so that the operator operates within a certain range.
- The position and size of the virtual operation surface that an operator can naturally recognize are strongly influenced by the positional relationship between the operator and the monitor, and it is better to assume the positions of the monitor, camera, and operator in advance throughout the system; by restricting the standing position in this way, the operator can generally estimate where the virtual operation surface exists and perform operations.
- When there are a plurality of operation candidates, the person 710 in the front row is identified as the operator 102 and the virtual operation surface is formed for that person. Which of the plurality of persons is selected as the operator 102 can be determined in various ways depending on the system, but by not providing an operation area for anyone other than the foremost priority user, malfunctions and input errors can be prevented (in the case of single-person input).
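One way to implement the "foremost person becomes the operator" rule is to compare the average depth of each extracted person region; the sketch below assumes a hypothetical `people` list of per-person depth maps and is not taken from the patent itself.

```python
import numpy as np

def pick_foremost_operator(people):
    """people: list of 2-D numpy arrays, one depth map (in metres) per extracted
    person region, with NaN outside the silhouette. Returns the index of the
    person closest to the camera, who is treated as the operator."""
    mean_depths = [np.nanmean(depth) for depth in people]
    return int(np.argmin(mean_depths))  # smallest mean depth = front row
```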
- FIG. 2 is a block diagram schematically showing the structure of the computer 110 of the image recognition apparatus of this embodiment.
- The computer 110 is connected to the monitor 111 and to the video camera 201 that captures the operator 102 and the like, and the captured image is taken into the computer 110.
- The CPU 210 performs the image extraction, position calculation, and the like that are features of the present embodiment, and decides whether a part of the body protrudes from the operation surface toward the video camera based on the calculated position.
- the computer 110 generally includes a CPU 210, executes a program stored in the ROM 211 or the like on the RAM 212, and outputs a processing result based on an image input from the image recognition apparatus to the monitor 111 or the like.
- In this example, the monitor 111 mainly outputs the various images provided by the applications that the operator wants to experience, but it also displays information that assists operation input, as described later.
- FIG. 3 is a block diagram showing an example of a functional module of a program processed in the CPU 210 of the computer 110 of this embodiment.
- the processing in this system is executed by an image reading unit 301, an image extraction unit 302, an image position calculation unit 303, and an operation determination unit 304.
- the processing from the reception of an image from the video camera 201 to the output of data is executed by four modules.
- However, the present invention is not limited to this; the processing can also be performed using other modules or fewer modules.
- In the present embodiment, a virtual operation surface is formed based on the image of the operator 102 photographed by the video camera 201, the positions of the hands and fingers that are a part of the likewise-photographed operator 102 are calculated, and the positional relationship between the virtual operation surface 701 and the finger 601 of the operator 102 is computed.
- In the present embodiment, as preliminary preparation, initial settings known in the art are assumed; for example, when the image recognition apparatus of the present embodiment is newly installed, information such as the distortion of the lens of the video camera 201 to be used and the distance between the monitor 111 and the lens must be input to the apparatus, and threshold settings and the like are adjusted in advance. When the initial setting of the system is completed, the processing of the present embodiment is performed; this processing will be described below with reference to FIG. 4.
- FIG. 4 is a flowchart of processing according to this embodiment.
- data captured by the video camera 201 is read by the image reading unit 301 (S401), and an image of the operator is extracted from the data by the image extracting unit 302 (S402).
- a virtual operation surface and an operation area are formed based on the extracted image of the operator 102 (S403).
- In this example, the shape of the operation surface is a rectangle standing vertically from the floor, but the shape is not limited to this; operation surfaces of various shapes and sizes can be formed depending on the operator's mode of operation.
- The operation area includes the virtual operation surface that is a feature of the present embodiment and is the area in which the hands, fingers, and the like that form the operator's main means of operation are chiefly moved. As described later in the support up to reaching the virtual operation surface, a certain area extending from the operator's trunk to beyond the virtual operation surface is used for the operation recognition of the present invention.
- For example, as shown in FIG. 8, for an adult operator 810 the operation region 811 can be formed in consideration of height (the position of the line of sight) and arm length, and for a child operator 820, whose height and arms are shorter, the operation region 821 can be set accordingly. If the virtual operation surface is set within such an operation area, the operator can move a hand or finger naturally and the operation intended by the operator can be determined from that movement.
- More specifically, for example, the depth can extend to the fingertips of the operator's outstretched hand, the horizontal width to the distance between the left and right wrists with the arms extended, and the height can cover the range from the operator's head position to the waist position.
- If the target users of the system of the present embodiment range from elementary school children to adults, the height range is approximately 100 cm to 195 cm, and a correction range of about 100 cm is therefore required for the vertical position of the operation area or the virtual operation surface to absorb that height difference.
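As a concrete illustration of sizing the operation area from body dimensions, the sketch below derives the area from hypothetical head, waist, wrist, and fingertip positions measured by the 3D camera; the dataclass and field names are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class BodyMeasurements:
    head_y: float         # height of the head (m)
    waist_y: float        # height of the waist (m)
    left_wrist_x: float   # x of left wrist with arms spread (m)
    right_wrist_x: float  # x of right wrist with arms spread (m)
    fingertip_z: float    # depth reach of the outstretched hand (m)
    trunk_z: float        # depth of the trunk (m)

@dataclass
class OperationArea:
    width: float   # horizontal extent (wrist to wrist)
    height: float  # vertical extent (head to waist)
    depth: float   # trunk to fingertip of the outstretched hand

def build_operation_area(m: BodyMeasurements) -> OperationArea:
    """Size the operation area from the operator's extracted body dimensions."""
    return OperationArea(
        width=abs(m.right_wrist_x - m.left_wrist_x),
        height=m.head_y - m.waist_y,
        depth=abs(m.fingertip_z - m.trunk_z),
    )
```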
- The setting of the virtual operation surface and the operation area can be executed each time, executed only under certain conditions, or its timing can be selected in advance or at each occasion.
- The operation determination unit 304 uses the relative relationship between the virtual operation surface formed by the operation input system and the operator 102 (S404). When a part of the operator 102 comes in front of the operation surface as seen from the video camera 201, it is determined that the operation has started (S405), and from the shape (open hand, two fingers raised, etc.) and movement of each part, it is determined which of the operations assumed in advance the shape and movement correspond to (S406). Here, which shape or movement corresponds to which operation can be determined independently by the system, or any method known in this technical field can be adopted. The determination result is then executed by the computer 110 as if such an operation had been input (S407).
- the process ends (S408).
- The determination of the operation content is not limited to the method described here; any method known in this technical field can be used. Although a specific determination method is omitted here, generally the shapes and movements of the operator's body, such as predetermined gestures, and the operation contents they signify are stored in a database or the like, and after image extraction this database is accessed to determine the operation content. In this case as well, the determination accuracy can of course be improved by using image recognition technology, artificial intelligence, and the like by methods known in this technical field.
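The database lookup described above could be as simple as a table keyed by a recognized shape/movement pair; the sketch below is an assumed minimal form, and the shape labels, movement labels, and table contents are illustrative only.

```python
# Hypothetical mapping from a (shape, movement) pair recognized in S406 to an operation.
OPERATION_TABLE = {
    ("open_hand", "push"): "select",
    ("two_fingers", "swipe_left"): "previous_page",
    ("two_fingers", "swipe_right"): "next_page",
}

def determine_operation(shape: str, movement: str):
    """Return the operation content for the recognized shape/movement,
    or None when no stored pattern matches (treated as no operation)."""
    return OPERATION_TABLE.get((shape, movement))
```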
- It will be understood that the position and size at which the virtual operation surface is formed change depending on, for example, whether the operator is a child or an adult.
- Furthermore, a three-dimensional camera can measure the distance to an object on a plane parallel to, or concentric with, the CCD or lens surface. If the monitor is installed at the height of the operator's line of sight, the camera is close to the monitor, and both are installed vertically with respect to the floor, then, provided the operator is also standing, it can be said that no adjustment or correction of their mutual positional relationship is required when generating the virtual operation surface.
- However, various situations are conceivable for the camera installation position and its positional relationship with the monitor and the operator.
- In general, an operator performs an input operation while looking at the operation target screen. Therefore, unless the virtual operation surface is arranged perpendicular to the straight line connecting the operator's line of sight and the operation target screen, and the operation area is generated along it, the angle of the operator's pushing stroke in the Z direction becomes inconsistent: even if the operator pushes toward the point being aimed at, the push drifts off along some angle and a correct operation cannot be made. Therefore, when forming the virtual operation surface, its angle, size, and position need to be adjusted according to the positions and arrangement of the monitor, the camera, and the operator.
- For example, the operation area 821 and the virtual operation surface 601 are determined according to the operator 820 as shown in FIG. 8; however, when the camera 201 is arranged at the top of the monitor 111 as in the example shown in FIG. 9, if the virtual operation surface 601 is not perpendicular to the direction 910 in which the operator 820 extends the arm, the operator 820 cannot obtain a good feeling of operation with respect to the virtual operation surface.
- Accordingly, the virtual operation surface 701 is tilted upward so that the operator 820, looking up at the monitor 111, can operate it.
- In this case as well, since the field of view 1011 of the camera 201 is tilted at a certain angle with respect to the line-of-sight direction 1010 as in the example shown in FIG. 9, the information read by the camera 201 needs to be corrected so that it matches the tilted virtual operation surface 701. Further, referring to FIG. 11, since the camera 201 is placed away from the monitor 111 and near the floor, the angle between the line of sight 1110 of the operator 820 and the field of view of the camera 201 is even larger, so a correspondingly larger correction is necessary.
- FIG. 12 is a diagram for explaining an example for defining the virtual operation surface 701 and the operation area 821.
- To determine them, information such as the positions and installation method (e.g., the installation angle) of the monitor 111 and the camera 201 and the standing position and height of the operator 820 is used. That is, as one example, the virtual operation surface 701 perpendicular to the operator's line of sight is first calculated from the eye height (body height) of the operator 820 relative to the monitor 111 and from the standing position.
- Next, the angle between the line A-B connecting the head and trunk of the operator 820 and the center line 1210 of the field of view of the camera 201 is measured, and the inclinations of the virtual operation surface and the operation area are corrected.
- The arm stroke may be extracted from the image of the operator, or may be determined from the obtained height information together with information on the average arm length for each height.
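The geometric adjustment described above amounts to building a plane perpendicular to the eye-to-screen line and then measuring its angle against the camera's optical axis. Below is a rough sketch under that interpretation, assuming all vectors are expressed in one common world coordinate frame (an assumption, since the patent leaves the frames open).

```python
import numpy as np

def form_virtual_surface(eye_pos, screen_center, reach):
    """Place the virtual operation surface perpendicular to the line of sight,
    at the operator's comfortable reach along that line.
    Returns (origin, unit normal) of the surface plane."""
    gaze = np.asarray(screen_center, float) - np.asarray(eye_pos, float)
    normal = gaze / np.linalg.norm(gaze)          # surface normal = line of sight
    origin = np.asarray(eye_pos, float) + reach * normal
    return origin, normal

def tilt_between(camera_axis, surface_normal) -> float:
    """Angle (radians) between the camera's optical axis and the surface normal,
    used to correct readings from a tilted camera (cf. FIGS. 9 to 12)."""
    a = np.asarray(camera_axis, float)
    b = np.asarray(surface_normal, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))
```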
- the position, size, angle, and the like of the virtual operation surface can be set using a marker similar to the operation surface formation reference of the second embodiment described later.
- As described above, the virtual operation surface and the operation area of the present embodiment are determined based on the positions and arrangement of the camera, the monitor, and the operator so that natural operation and easier operation determination are possible, and the actual movement of the operator is detected to determine which operation is being performed.
- Specific processing not described here, such as how the position and shape are identified from the image of the three-dimensional camera or how it is determined whether a part of the operator has passed through the virtual operation surface, can be achieved using any method known in this technical field.
- (Operation input support) As described above, simply by forming a virtual operation surface with a three-dimensional video camera, the operator can perceive a touch-panel-like operation surface in space, and by performing various operations on that surface, operation input using all or part of the body becomes possible. Furthermore, by supporting the operation input, for example by displaying the image of the operator relative to the virtual operation surface on the monitor 111, the system of the present embodiment can be used even more easily.
- FIG. 17 is a diagram illustrating an example in which guidance for assisting such an operation input is displayed on the monitor 111.
- In this example, the operator points at a desired place on the displayed image by projecting a finger with respect to the virtual operation surface superimposed on that image.
- the operator can execute the next operation while recognizing and confirming the currently performed operation.
- For example, a pointer 901 is displayed on the screen when a finger protrudes beyond the operation surface and disappears (or is displayed with shading) when the finger is withdrawn, so that the operator can naturally perform the input method of the present embodiment while checking the movement of the hand against what is displayed on the monitor.
- Similarly, an operation screen 902 representing the state of the operator, as shown in FIGS. 5 and 6, can be displayed small in the upper-right corner of the monitor 111 to show what movement is currently being made and how it would be determined as an operation by the system; and by showing a line graph 903 of the hand movement, the operator can be made aware of the precise back-and-forth motion of the hand, so more accurate operation can be expected.
- gestures that can be used in the system can be displayed in the guidance, and the operator can be urged to input an operation following the gesture.
- With the method described above, the operator can operate the system on the basis of the virtual operation surface virtually formed in space as if an input device such as a touch panel existed there, so that the operation content can be input reliably.
- Furthermore, operation input can be supported until the hand or finger that is a part of the operator reaches the virtual operation surface, that is, from the time the operator starts to move a hand or finger in order to perform some operation until the virtual operation surface is pressed. The principle of such support is to display visually on the monitor 111 what operation the operator is about to perform, in accordance with the movement of the position of a part of the operator, for example the hand or finger, relative to the virtual operation surface, thereby guiding the operator and enabling accurate operation input.
- Referring to FIGS. 18 and 19, when the operator operates at a predetermined standing position, the virtual operation surface 701 is formed at a position suitable for operation at that standing position, or at an appropriate position matched to the operator's standing position, and likewise a suitable operation area 821 is set for the operator 820. As described above, what kind of operation is currently being attempted is shown on the monitor 111 in various forms so that the operator can recognize his or her own operation.
- FIG. 20 shows how the icon changes on the screen 2501 of the monitor 111 as a result of the above operation.
- a television program guide is displayed on the screen 2501 of the monitor 111, and an operation relating to a certain program is about to be performed.
- When the operator wants to select the "change setting" menu button, the operator tries to select it by projecting the finger 601 toward the monitor 111 as described above.
- In the present embodiment, when the finger 601 approaches the virtual operation surface to a certain extent, an icon 2503 is displayed on the screen 2501; because the finger is still far away, a relatively large icon, corresponding to those on the right side of FIG. 19, is displayed. As the operator extends the arm further, this icon approaches the target selection item "setting change" while becoming smaller, becoming a special icon when it reaches the size of icon 2502; when the finger crosses the virtual operation surface, it is determined that the item at the pointed position has been selected.
- In this way, the operator can grasp how the operation is being recognized by the system, intuitively perceive the position of the virtual operation surface, and perform operations such as menu selection.
- Here, the whole operator, including the finger 601 and the arm 2401, and the positions and sizes of each part can be extracted using a three-dimensional camera, in the same way as the overall image of the operator. Since objects within the scene can thus be grasped including their depth, the distance and positional relationship to the virtual operation surface can be calculated from this information. However, since any method known in this technical field can be used for the three-dimensional camera, the position extraction, the distance calculation, and so on used in the present embodiment, their description is omitted here.
- The icons displayed on the screen here are circular and change in size according to the operator's action, but the invention is not limited to this; icons of various forms can be used as shown in FIG. 21. That is, referring to FIG. 21, (1) is a finger-shaped icon that becomes smaller as the finger approaches the virtual operation surface, as in the example described above. (2) is circular and becomes gradually smaller, but changes to a special shape when input or selection is confirmed, to indicate the confirmation.
- For this and other icons, the color of the icon can be changed instead of, or together with, the change in shape and size. For example, by changing from a cold color to a warm color, such as blue, green, yellow, and red, the operator can intuitively recognize that the operation is coming into focus and being confirmed.
- (3) is an X-like shape that, when the finger is far away, is not only large but also blurred; as the finger approaches, the icon becomes smaller and the blur disappears, leaving a sharp shape.
- (4) keeps the size of the whole icon unchanged, while the figure drawn inside it changes shape and appears to come into focus; in this case, the color of the figure can also be changed.
- (5) shown in FIG. 21 is one in which the shape also changes. In FIG. 21, the shape and color of the icon change according to the movement of the finger, and at the moment the finger crosses the virtual operation surface, the shape and color are changed, or the icon blinks, as shown in column 2601, so that the operator can recognize that the movement has been determined to be an operation. Although not illustrated, other icon changes, such as one that is transparent at first and becomes opaque as the finger approaches the virtual operation surface, are also effective.
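The icon feedback can be driven directly by the signed distance between the fingertip and the virtual operation surface. The sketch below maps that distance to a radius and to a cold-to-warm color; the function name, the maximum reaction distance, and the color blend are assumptions for illustration.

```python
def icon_feedback(distance_to_surface, max_dist=0.30, r_far=40.0, r_near=10.0):
    """distance_to_surface: metres in front of the virtual operation surface
    (negative) or past it (non-negative, i.e. crossed). Returns (radius_px, rgb)."""
    if distance_to_surface >= 0.0:          # finger has crossed the surface
        return r_near, (255, 0, 0)          # small, warm (red): operation decided
    t = min(-distance_to_surface, max_dist) / max_dist   # 0 = touching, 1 = far
    radius = r_near + t * (r_far - r_near)  # icon shrinks as the finger approaches
    rgb = (int(255 * (1 - t)), 0, int(255 * t))  # blend blue (far) -> red (near)
    return radius, rgb
```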
- FIG. 23 is a diagram illustrating an example in which the color of the selected button is changed from a cold color system to a warm color system as the finger 601 approaches.
- FIG. 24 is a diagram illustrating an example of changing the fill density of the button.
- A similar menu selection example is shown in FIG. 26. Here, the menu 4301 is displayed on the screen, and a large icon 2610 is displayed on, for example, item 4302 of the menu. Thereafter, the selection of item 4302 is confirmed and a small icon 2611 is displayed to notify this. Then, by moving the finger 601 left, right, up, or down, the selected item in the menu moves, and when the movement stops at a desired item for a certain period of time, processing corresponding to the selected item is executed.
- FIG. 31 also shows a menu displayed when the finger 601 is within a certain area in front of the virtual operation surface 701, as in FIG. 26, but here it is an example of video image control. In this example as well, the menu can be operated by means of the large icon 3110 and the small icon 3111, as in the example shown in FIG. 26.
- FIG. 25 is a diagram illustrating an example display screen for inputting an instruction to move a graphic displayed on the screen according to the present embodiment.
- In this example, the instruction is given by moving the operator's hand or finger while it touches the virtual operation surface. First, when the finger approaches the screen, the icon is reduced from icon 4201 on screen 4211 to icon 4202 on screen 4212, indicating that the finger is approaching the virtual operation surface. When the finger touches the virtual operation surface, the icon changes to, for example, icon 4203 on screen 4213 and is left there; if the finger is moved upward in that state, a rubber band 4204 or the like on screen 4214 is displayed to show the direction of movement, so that the operator can confirm his or her own operation. Further, when the finger is moved to the right, the rubber band 4205 of screen 4215 can be displayed. In this way, a rubber band (an arrow in the figures) that stretches and contracts according to the up/down/left/right drag distance after the finger or the like arrives on the virtual operation surface appears (the position of icon 4203 is fixed until the finger leaves the virtual operation surface), the moving speed can be changed according to the stretched distance, and the moving direction in 3D space can be changed according to the stretch angle (the tip of the arrow follows the movement of the arm tip or fingertip).
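The rubber-band behavior, where the icon is pinned at the point of first contact and the stretch controls direction and speed, can be sketched as below; the class name, anchoring logic, and speed-scaling constant are assumptions for illustration.

```python
import numpy as np

class RubberBandDrag:
    """Pin an anchor where the finger first touches the virtual operation surface;
    the stretch from the anchor gives a move direction and a speed."""
    def __init__(self, speed_per_metre=2.0):
        self.anchor = None
        self.speed_per_metre = speed_per_metre

    def update(self, fingertip_xy, touching: bool):
        if not touching:               # finger left the surface: the band disappears
            self.anchor = None
            return None
        p = np.asarray(fingertip_xy, float)
        if self.anchor is None:        # first contact: fix the icon position
            self.anchor = p
            return np.zeros(2), 0.0
        stretch = p - self.anchor      # arrow from anchor to current fingertip
        length = float(np.linalg.norm(stretch))
        direction = stretch / length if length > 0 else np.zeros(2)
        return direction, length * self.speed_per_metre   # longer band = faster move
```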
- The principle of this embodiment has been described for the case where the operator and the monitor are at approximately the same height, as shown in FIG. 18, that is, where the virtual operation surface is formed roughly perpendicular to the horizontal direction in front of the operator; however, this principle is not affected by the positional relationship or shape of the operator and the monitor, and various arrangements and configurations are possible.
- For example, the present invention can also be applied to a system arrangement in which the monitor is tilted, as illustrated in the drawings. In that case, since the three-dimensional camera 201 is tilted together with the monitor 111, there is basically no significant difference from the horizontally placed arrangement described above; by performing position correction or the like by any of the methods known in this technical field, the operation can be determined by calculating the positional relationship between the operator's part and the virtual operation surface.
- As described above, the operator can operate on the basis of the virtual operation surface virtually formed in space as if an input device such as a touch panel existed there, so that the operation content can be input reliably.
- In this embodiment, the content of the operation thus determined is further determined by the positional relationship between the virtual operation surface and a part of the operator's body, such as a hand, or an object worn by the operator, in the region extending from the virtual operation surface in the direction away from the operator.
- For example, the operation area is set as two or three virtual operation hierarchies (layers) in the z-axis direction, which is the direction away from the operator; the type of operation is determined according to which layer the operator's hand is in, and the content of the operation is determined from the movement of the hand within that layer. In this way, the operator can recognize the operation more easily.
- The distance in the z direction between the part of the operator and each of the planes dividing the layers can be obtained by the same method used to calculate the distance between the formed virtual operation surface and the part of the operator.
- the trigger surface 701 shown in FIG. 27 is a virtual operation surface of the present embodiment.
- the operation area ahead of the trigger plane 701 is divided into three levels A to C by planes 4501 and 4502, and different types of operations are assigned to each.
- an object rotation operation is assigned to the hierarchy A
- an enlargement / reduction operation is assigned to the hierarchy B
- an object movement operation is assigned to the hierarchy C.
- An operation determined by moving the finger 601 in each layer is executed.
- When the finger 601 passes through the trigger surface 701 and is in layer A, an icon indicating the finger 601, for example a rotation icon 4503, is displayed, and the object designated around the position it indicates rotates in accordance with the movement of the finger 601.
- When the finger 601 is in layer B, for example, an enlargement/reduction icon 4504 is displayed on the monitor 111; the object is enlarged when the finger 601 is moved in the z direction and reduced when it is moved in the opposite direction.
- Similarly, when the finger 601 is in layer C, a movement icon 4505 can be displayed at the position of the finger 601 on the designated object displayed on the monitor 111, and the object can be moved in accordance with the movement of the finger 601.
- the planes 4501 and 4502 separating the layers can be arranged so that each layer has the same thickness, or can be arranged so as to have different thicknesses according to the operation type assigned to the layer.
- For example, the enlargement/reduction operation is assigned to layer B; because enlargement/reduction must be expressed by forward and backward movement, the movement in the z direction is usually larger than in layers A and C, so the operation can be made easier by making layer B thicker.
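Assigning an operation type per layer reduces to bucketing the distance beyond the trigger surface. The sketch below follows the unequal-thickness idea described here (layer B thicker than A and C); the specific thickness values and operation labels are illustrative assumptions.

```python
# Layer boundaries measured as distance (m) beyond the trigger surface 701.
# Layer B is made thicker because zoom is expressed by a z-direction stroke.
LAYERS = [
    ("A", 0.00, 0.08, "rotate"),
    ("B", 0.08, 0.24, "zoom"),
    ("C", 0.24, 0.36, "move"),
]

def classify_layer(depth_beyond_trigger):
    """Return (layer_name, assigned_operation) for a hand that is
    depth_beyond_trigger metres past the trigger surface, or None if the hand
    has not crossed the surface or is beyond the operation area."""
    if depth_beyond_trigger < 0.0:
        return None
    for name, lo, hi, op in LAYERS:
        if lo <= depth_beyond_trigger < hi:
            return name, op
    return None
```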
- FIG. 28 is a diagram showing an example of another icon of the present embodiment.
- In FIG. 28, an operation for specifying the operation position on the monitor 111 is assigned to layer A, an operation for "grabbing" the object at the specified position is assigned to layer B, and an operation for throwing or moving the grasped object is assigned to layer C.
- When determining these operations, a multi-sensing state in which the hand opposite to the operating hand enters the operation area can be treated as "no operation" (or vice versa); in this example, whether each layer is operated is determined by inserting and withdrawing the hand opposite to the operating hand. This relies on two-handed operation, but various other methods are conceivable, such as providing a pass-through area on the XY plane.
- As described above, with the present embodiment the operator can operate the system through movements without memorizing or agreeing on gestures in advance; moreover, since the posture of the operator and the movement of each part, for example of the hands, can be recognized, so-called mixed reality (MR) can also be realized in games that use the whole body.
- The present embodiment is basically the same as the system configuration of the first embodiment described above, except for the operation surface formation reference. That is, in the present embodiment, on the basis of the system and processing of the first embodiment, the concept of an operation surface formation reference, such as a marker 101 perceivable by the operator as shown in FIG. 14, is introduced, so that the operator can more easily recognize the virtual operation surface by using it as a mark. That is, the marker 101 shown in FIG. 14 and elsewhere is an operation surface formation reference that allows the operator 102 to recognize the virtual operation surface; as shown in FIG. 16, the user 102 can assume that the virtual operation surface lies above the marker 101 shown on the floor surface.
- the horizontal width of the marker 101 can be the width of the operation surface.
- The front and back of the marker 101 can be distinguished by auxiliary markers or the like, the operation area can be fixed by using the auxiliary markers, and they can also serve as elements for three-dimensional perspective calculation; an area suitable for operation can likewise be indicated.
- When the operator 102 is captured by the camera 201, an operation surface 701 is virtually formed above the marker 101 as shown in FIG. 16, and the operator 102 can easily perform an input operation by projecting the hand 601 beyond the virtual operation surface 701 with the marker 101 as a reference, or by moving the hand 601 so as to touch a part of the screen linked with the monitor 111 as if touching a touch panel on the operation surface 701. Also, an action can be decided by pushing (determining) it forward past the operation surface, or the push-out can be used as the criterion for confirming a decided operation, so it is easy for the user to recognize and the operability is close to that of a conventional touch-panel operation.
- In the drawings, the virtual operation surface is shown as being formed vertically, directly above the marker; however, the virtual operation surface can also be tilted with only its bottom edge following the operation surface formation reference, or the position at which it is formed can be changed according to height.
- For example, a provisional operation surface may first be calculated from the marker 101 and then adjusted based on the operator's image so that the virtual operation surface is formed at an appropriate position.
- That is, the operation surface is calculated from the measured position of the marker 101 and the preset positions of the monitor 111 and the camera 201, the height, arm length, and so on are extracted from the operator's image, and the position, size, angle, and the like of the virtual operation surface can then be corrected taking this information into account.
- Further, the marker serving as the operation surface formation reference is visible, so the operator visually recognizes the marker, uses it as a rough guide, and operates by estimating where the virtual operation surface exists. Therefore, the virtual operation surface needs to be formed above the marker, but the front-back positional relationship seen from the operator may change depending on the situation of the operator and of the whole system. In general, as shown in FIG. 27, for example when a marker 4401 is placed on the floor or the like, it is thought that, given the position of the eyes of the operator 102, the operator will often stand at a position close to directly above the marker 4401.
- the operation region including the virtual operation surface is set in consideration of the arm stroke and the like.
- By using the markers in various ways, the operation area can thus be determined more objectively, that is, with an accuracy that any operator can recognize to a certain degree.
- Moreover, since an operation surface formation reference such as that of this embodiment allows measurement markers to be dispersed at suitable positions over a wide range of the captured screen, measurement with very high reliability is possible.
- In addition, it can be used together with a calibration system that guarantees that the markers are always within the camera's shooting range, realizing a space-saving and multifunctional device; basically, there is no need to re-measure each time after the initial installation calibration.
- the marker 101 is photographed by the video camera 201 and becomes an operation surface formation reference.
- Various marker materials known in this technical field can be used, and the appropriate material is selected according to the camera used. For example, with a normal camera, a distinctive coloring that stands out from the background color is necessary, and when an infrared camera is used, a retroreflective material or the like can be used.
- On the other hand, when laser light is used, a black bar or the like is used instead of a marker or retroreflective material; in this case, the portion irradiated with the laser light is not reflected and appears as a gap in the captured data, so the position of the bar can be detected in this way as well.
- When a marker is applied with a suitable coloring, it can be extracted specifically by, for example, the following processing.
- Data captured by the video camera 201 is read by the image reading unit 301, and in the case of a color image, for example, the image extraction unit 302 extracts from the data only the image of the color region predetermined as the marker 101.
- In this embodiment, upper and lower threshold values are set for each of the luminance signal Y and the color-difference signals U and V of the color NTSC signal, and pixels satisfying all the thresholds are extracted; however, any method known in this technical field can be used. In this way, the position of the marker 101 is grasped three-dimensionally, the shape of the virtual operation surface is calculated, and it is stored in a database.
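The threshold extraction on Y, U, and V can be done with standard image-processing primitives. A minimal sketch using OpenCV is shown below; the threshold values themselves are placeholders that would have to be tuned for the actual marker color.

```python
import cv2
import numpy as np

def extract_marker_region(bgr_frame,
                          lower=(0, 100, 130),     # (Y, U, V) lower bounds - placeholders
                          upper=(255, 140, 255)):  # (Y, U, V) upper bounds - placeholders
    """Return a binary mask of pixels whose Y/U/V values all fall inside the
    configured thresholds, i.e. the candidate region of the marker 101."""
    yuv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YUV)
    mask = cv2.inRange(yuv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    # Optional cleanup: remove isolated pixels so only the marker stripe remains.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return mask
```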
- When calculating distortion and scale, markers serving as references can be provided at at least four points; for example, if there are four or more reference points, they can be connected into line segments and used for calibration.
- The marker can thus be used by sticking a suitable material to the floor surface, but it is not limited to this; it can be applied directly to the floor surface or attached using any attachment method known in this technical field.
- In the above description, the marker 101 is used as the operation surface formation reference, but the present invention is not limited to this; any member or structure can be used as the three-dimensional measurement reference.
- For example, the marker may be a figure of various shapes instead of the shape shown in FIG. 1, and a plurality of markers having a certain area may be provided at several points.
- Alternatively, a virtual operation surface 701 can be formed by attaching markers 1902 and 1903 to a three-dimensional object, for example the desk-shaped object 1901 shown in FIG. 15, and an input operation can be performed by operating on it with the finger 601 or the like.
- In the above description, the shape of the virtual operation surface is a rectangle standing vertically from the floor, but the shape is not limited to this; operation surfaces of various shapes and sizes can be formed depending on the shape and arrangement of the marker 101. For example, depending on how the marker 101 shown in FIG. 14 is formed, and by arranging the auxiliary markers three-dimensionally, a sloped operation surface having a certain angle with respect to the floor surface, or even a curved operation surface, can be formed.
- the processing is described based on a virtual operation surface formed by a marker or the like.
- any of the above is a method by which the user can visually recognize the position after calibration, and it may be replaced with another means (three-dimensional or planar) that imposes a movement restriction. Furthermore, instead of relying only on calibration on the camera side, a reference plane may be set in advance at a distance and position that is easy to use, and a floor line or a solid guide may then be installed on that plane (area) so that the user can recognize it.
- a marker is basically attached to the edge of a desk or table, and the system recognizes that the operator touches, or moves the hand relative to, the virtual operation surface formed above the marker in order to perform input operations. At this time, the edge of the desk or table that is not provided with a marker restricts the movement of the operator and supports a raised hand so that it naturally touches the virtual operation surface. This concept will be described with reference to FIG. 38.
- a virtual operation surface 701 is formed above the marker 4402 serving as an operation surface forming unit.
- the operator 102 is kept at a fixed distance from the virtual operation surface by some kind of operation restriction unit 4401.
- the virtual operation surface can therefore be operated with a hand 601 put forward naturally.
- the virtual operation surface 701 is formed immediately above the marker 4402.
- the virtual operation surface 701 can be moved back and forth with the marker 4402 as a reference.
- because the operation restriction unit 4401 is basically fixed, depending on the body shape of the operator 102, a virtual operation surface formed directly above the marker 4402 may be too close or too far away to be usable. In this case, the position where the virtual operation surface is formed can be shifted back and forth from the marker 4402 for each operator.
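The per-operator shift mentioned above could, for example, be computed from body-dimension information; the following is only an assumed sketch (the reach ratio, coordinate convention and all names are illustrative, not the patent's method).

```python
def surface_z_for_operator(marker_z, shoulder_z, arm_length, reach_ratio=0.6):
    """Place the virtual operation surface at a comfortable fraction of the
    operator's reach and report the shift relative to the marker position.
    In this sketch, z increases from the operator toward the marker/display."""
    comfortable_z = shoulder_z + reach_ratio * arm_length
    shift_from_marker = comfortable_z - marker_z  # positive: beyond the marker
    return comfortable_z, shift_from_marker
```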
- the virtual operation surface is formed based on an operation surface formation reference that can be perceived by the operator and on the image of the operator himself or herself captured by the three-dimensional camera, so its position is easy to specify and the height of the operator is taken into account, and the operator can operate with a natural feel and without a sense of incongruity.
- the present embodiment is basically the same as the system configuration of the first and second embodiments described above, except that a projector is used for display instead of a monitor. That is, in this embodiment the processing is basically the same as in the first and second embodiments, but instead of a monitor 111 such as an LCD or plasma display, various information is notified to the operator by projecting an image from the projector 3011 onto the screen 3010 as shown in FIG. 30. In the system of the present embodiment, only the screen is disposed on the display surface on which the LCD or the like is disposed in the first embodiment and so on; therefore, the projector 3011 that projects the image, the camera 201, and the computer that controls them can be integrated as shown in FIG. 30.
- a guide bar 3012 is placed to make the entry-prohibited area recognizable, and it can also be used as an operation surface formation reference as in the second embodiment.
- this embodiment differs from the first embodiment only in the display method, and the display surface itself is not greatly different; therefore, the setting of the virtual operation surface and the operation region, the operation determination processing, and so on are basically the same as in the first or second embodiment.
- the projector, the camera, and the computer are integrated, and are arranged between the operator and the display surface (screen 3010). Therefore, the position of the camera 201 is slightly different, and the camera is positioned below the display surface.
- the adjustment range of the angle of the operation region or the like becomes larger.
- the positional relationship between the guide bar 3012 and the virtual operation surface 701 is different from the case described in the second embodiment, and the virtual operation surface 701 is not always formed directly above the guide bar 3012.
- this is because, although the guide bar 3012 of the present embodiment, which serves to prevent intrusion, and a marker 101 deliberately drawn on the floor so as to be perceivable by the operator, as shown in FIG., play the same role as the operation surface formation reference, the position where the virtual operation surface is formed differs depending on the positional relationship with the operator and on the relationship between the operator and the system.
- a virtual operation surface can be formed on the far side or the near side of the guide bar 3012 depending on the system.
- because the projector, the camera, and the computer can be integrated when a projector is used for display, installation and handling are easy, and when the screen is large this is advantageous in terms of ease of installation and cost compared with using a large LCD.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
FIG. 1 is a diagram showing an example of the operation input system of the present embodiment. The monitor 111 of the present embodiment is placed in front of the operator 102, and the operator 102 can operate the operation input system with the understanding that a virtual operation surface exists at a fixed position between the operator and the monitor 111 and that the shape of a finger or the like will be the subject of operation determination. The monitor 111 displays various images for the various applications the system is intended for; in addition, as described later, it can support operation input, for example by displaying the relevant part of the operator 102 in a corner of the screen so that the operator 102 can recognize which movements can currently be determined to be operations. The movement of the operator 102 is captured by the video camera 201 and the captured video is processed by the computer 110; the position and size of the optimum virtual operation surface and of the operation region including it are set according to the position, height and arm length of the operator 102, or according to body dimension information such as height and shoulder width, and the system determines what operation is meant by the gesture of the portion extending from the virtual operation surface toward the monitor 111. That is, the computer 110 creates a stereoscopic image of the operator 102 from the data obtained from the video camera 201, calculates the position of the virtual operation surface, further adjusts the position and size of the virtual operation surface according to the position and arrangement of the video camera 201 and the monitor 111 described later, determines whether the fingers or the like of the operator 102 extend past the virtual operation surface toward the video camera 201, and determines the content of the operation with that portion as the target of the operation.
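The crossing test described in this paragraph can be pictured with a small sketch; the plane representation, point format and all function names below are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def make_surface(origin, normal):
    """Represent the virtual operation surface as a point on the plane and a
    unit normal pointing from the surface toward the camera/monitor side."""
    n = np.asarray(normal, dtype=float)
    return np.asarray(origin, dtype=float), n / np.linalg.norm(n)

def signed_distance(point, surface):
    """Positive when the point is on the camera side of the surface."""
    origin, normal = surface
    return float(np.dot(np.asarray(point, dtype=float) - origin, normal))

def is_operation(fingertip, surface, press_margin=0.0):
    """Treat the movement as an operation once the fingertip has crossed the
    surface (optionally by a small margin) toward the camera."""
    return signed_distance(fingertip, surface) > press_margin

# Example: camera at z = 0, surface 0.8 m in front of it, normal facing the camera.
surface = make_surface(origin=(0.0, 0.0, 0.8), normal=(0.0, 0.0, -1.0))
print(is_operation(fingertip=(0.1, -0.2, 0.75), surface=surface))  # True: fingertip is past the surface
```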
In the present embodiment, as shown in FIG. 6, a virtual operation surface is formed based on the image of the operator 102 captured by the video camera 201, the position of the hand or finger that is part of the likewise captured operator 102 is determined, and processing is performed to calculate the positional relationship between the virtual operation surface 701 and the finger 601 of the operator 102. As a premise for such processing, initial settings known in this technical field are assumed; for example, when the image recognition apparatus of the present embodiment is newly installed, information such as the distortion of the lens of the video camera 201 to be used and the distance between the monitor 111 and the lens must be entered into the apparatus in advance. In addition, threshold settings and the like are adjusted beforehand. When the initial setting of the system is completed, the processing of the present embodiment is performed; this processing is described below with reference to FIG. 4.
As described above, simply by forming a virtual operation surface with a three-dimensional video camera, the operator can recognize an operation surface like a touch panel in space, and by performing various operations on this operation surface, operation input using all or part of the body becomes possible. Furthermore, by supporting the operation input, for example by displaying the image of the operator relative to the virtual operation surface on the monitor 111, the system of the present embodiment can be used even more easily.
In the present embodiment, the operator operates with reference to a virtual operation surface virtually formed in space, as if an input device such as a touch panel existed there, so that the content of the operation is reliably determined. By also supporting the operation until the hand or finger that is part of the operator reaches the virtual operation surface, that is, from the moment the operator starts moving the hand or finger to execute some operation until the virtual operation surface is pressed, operation input can be made even easier and more precise.
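One possible form of the operation support described above, offered only as an assumed sketch rather than the patent's implementation, is an on-screen indicator whose size shrinks as the hand approaches the virtual operation surface and stops changing once the surface is reached; the radii and function name are illustrative.

```python
def indicator_radius(distance_to_surface, max_distance=0.30,
                     max_radius=60, min_radius=10):
    """Map the remaining distance to the surface (metres) to an indicator
    radius (pixels); the indicator shrinks as the hand approaches."""
    if distance_to_surface <= 0:  # surface reached or crossed: operation determined
        return min_radius
    ratio = min(distance_to_surface / max_distance, 1.0)
    return min_radius + (max_radius - min_radius) * ratio
```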
In the present embodiment, the operator operates with reference to a virtual operation surface virtually formed in space, as if an input device such as a touch panel existed there, so that the content of the operation is reliably determined; the content of the operation determined in this way is decided by the positional relationship between the virtual operation surface and a part of the operator's body such as a hand, or an object worn by the operator, in the direction away from the operator, that is, on the far side of the virtual operation surface. For example, two or three operation regions are set as virtual operation layers in the z-axis direction away from the operator, the type of operation is determined by which layer the operator's hand is in, and the content of the operation is determined by the movement of the hand within that layer. If the position of the hand, the type of operation and so on are displayed on the display screen the operator is viewing, the operator can recognize the operation more easily. The distance in the z direction between the part of the operator and the surface dividing each layer can be obtained by the method described above for calculating the distance between the formed virtual operation surface and the part of the operator.
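As a hedged illustration of the layered determination just described, the following sketch classifies the hand's depth beyond the virtual operation surface into assumed layers; the layer names, boundaries and function names are illustrative and not taken from the patent.

```python
# Layers stacked along z beyond the virtual operation surface (metres).
LAYERS = [
    ("select", 0.00, 0.10),  # 0-10 cm beyond the surface
    ("drag",   0.10, 0.20),  # 10-20 cm beyond the surface
    ("zoom",   0.20, 0.35),  # 20-35 cm beyond the surface
]

def classify_layer(depth_beyond_surface):
    """Return the operation type assigned to the layer the hand is in,
    or None if the hand has not crossed the virtual operation surface."""
    for name, near, far in LAYERS:
        if near <= depth_beyond_surface < far:
            return name
    return None
```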
The present embodiment is basically the same as the system configuration of the first embodiment described above, except for the operation surface formation reference. That is, in the present embodiment, building on the system and processing of the first embodiment, the concept of an operation surface formation reference, such as a certain marker 101 that can also be perceived by the operator as shown in FIG. 14, is introduced, so that the operator can more easily recognize the virtual operation surface using it as a landmark. That is, the marker 101 shown in FIG. 14 and elsewhere is an operation surface formation reference for the operator 102 to recognize the virtual operation surface, and as shown in FIG. 16, the user 102 can perform various operations on the assumption that the operation surface 701 virtually exists above the marker 101 shown on the floor surface, for example making a gesture by pushing the hand 601 forward with the marker 101 as a reference. The width of the marker 101 can also be taken as the width of the operation surface. Auxiliary markers can be used to distinguish the front and back of the marker 101, to determine the operation region, or as elements of three-dimensional perspective calculation; their shape and orientation are free, and they may indicate an area suitable for measurement.
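To picture how a floor marker can fix the width and position of the virtual operation surface, the following is a hedged sketch (the coordinate convention, the 1.2 m height and all names are assumptions) that builds a vertical rectangle above the marker's two endpoints.

```python
import numpy as np

def surface_from_marker(p_left, p_right, height=1.2):
    """Return the four corners of a vertical rectangular virtual operation
    surface standing on the line segment between the marker endpoints.
    In this sketch the endpoints are (x, y, 0) floor points and z is up."""
    p_left = np.asarray(p_left, dtype=float)
    p_right = np.asarray(p_right, dtype=float)
    up = np.array([0.0, 0.0, height])
    return np.array([p_left, p_right, p_right + up, p_left + up])

corners = surface_from_marker((0.0, 0.8, 0.0), (1.0, 0.8, 0.0))  # 1 m wide surface above the marker
```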
The present embodiment is basically the same as the system configuration of the first and second embodiments described above, except that a projector is used for display instead of a monitor. That is, in the present embodiment the processing is basically the same as in the first and second embodiments, but instead of a monitor 111 such as an LCD or plasma display, various information is notified to the operator by projecting an image from the projector 3011 onto the screen 3010 as shown in FIG. 30. In the system of the present embodiment, only the screen is arranged on the display surface where the LCD or the like is arranged in the first embodiment and so on; therefore, the projector 3011 that projects the image, the camera 201 and the computer that controls them can be integrated as shown in FIG. 30. Such an integrated system is usually placed between the operator and the screen, so that, for example, a guide bar 3012 is placed to make the entry-prohibited area recognizable as shown in the figure, and this can also be used as an operation surface formation reference as in the second embodiment.
Claims (13)
- An image recognition apparatus comprising:
three-dimensional imaging means for reading an image of an operator and generating stereoscopic image data;
operation surface forming means for forming a virtual operation surface based on the image of the operator read by the three-dimensional imaging means;
operation determining means for reading, with the three-dimensional imaging means, the movement of an image of at least a part of the operator relative to the formed virtual operation surface, and determining whether or not the movement is an operation based on the positional relationship between the part of the operator and the virtual operation surface; and
signal output means for outputting a predetermined signal when the movement is determined to be an operation.
- The image recognition apparatus according to claim 1, wherein the operation determining means determines that an operation has been performed when the part of the operator is on the three-dimensional imaging means side of the virtual operation surface.
- The image recognition apparatus according to claim 1 or 2, wherein the operation determining means determines which operation is being performed based on the shape or movement of the portion of the part of the operator that is on the three-dimensional imaging means side of the virtual operation surface.
- The image recognition apparatus according to claim 3, wherein the operation determining means searches storage means storing operation contents associated in advance with shapes or movements of a part of the operator, and determines the operation corresponding to a matching shape or movement to be the operation to be input.
- The image recognition apparatus according to any one of claims 1 to 4, further comprising image display means arranged facing the operator,
wherein the operation determining means causes the image display means to display the current result of the operation determination so that the operator can recognize the result of the operation determination.
- The image recognition apparatus according to any one of claims 1 to 4, further comprising image display means arranged facing the operator,
wherein, when a movement of the operator is read within the region of a virtual operation layer, an indication assigned in advance to that virtual operation layer is displayed on the image display means.
- The image recognition apparatus according to any one of claims 1 to 4, comprising image display means, visible to the operator, that calculates the distance from the positional relationship between the virtual operation surface formed by the operation surface forming means and the part of the operator on the opposite side of the virtual operation surface from the three-dimensional imaging means, and displays an indication that changes according to that distance, thereby showing the operation to be determined.
- The image recognition apparatus according to claim 7, wherein, when the part of the operator is on the three-dimensional imaging means side of the virtual operation surface, the image display means stops the change of the indication and shows the determined operation.
- The image recognition apparatus according to any one of claims 1 to 8, comprising operation content determining means that, when a movement of the operator is read within the region of any of two or more virtual operation layers defined based on the positional relationship with the virtual operation surface, determines the content of the operation based on the operation type assigned in advance to that virtual operation layer and on the movement of the operator within that virtual operation layer.
- The image recognition apparatus according to any one of claims 1 to 9, wherein the operation surface forming means forms the virtual operation surface at a position corresponding to position information of the upper body of the operator.
- The image recognition apparatus according to any one of claims 1 to 10, wherein the operation surface forming means adjusts the position and angle of the virtual operation surface based on the position of the image display means.
- An operation determining method in which an image recognition apparatus recognizes an image of an operator and determines the content of an operation, the method comprising:
a three-dimensional imaging step of reading an image of the operator and generating stereoscopic image data;
an operation surface forming step of forming a virtual operation surface based on the image of the operator read by the three-dimensional imaging means;
an operation determining step of reading, with the three-dimensional imaging means, the movement of an image of at least a part of the operator relative to the formed virtual operation surface, and determining whether or not the movement is an operation based on the positional relationship between the part of the operator and the virtual operation surface; and
a signal output step of outputting a predetermined signal when the movement is determined to be an operation.
- A program causing an image recognition apparatus to execute an operation determining method of recognizing an image of an operator and determining the content of an operation, the operation determining method comprising:
a three-dimensional imaging step of reading an image of the operator and generating stereoscopic image data;
an operation surface forming step of forming a virtual operation surface based on the image of the operator read by the three-dimensional imaging means;
an operation determining step of reading, with the three-dimensional imaging means, the movement of an image of at least a part of the operator relative to the formed virtual operation surface, and determining whether or not the movement is an operation based on the positional relationship between the part of the operator and the virtual operation surface; and
a signal output step of outputting a predetermined signal when the movement is determined to be an operation.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/384,682 US8890809B2 (en) | 2009-08-12 | 2010-08-12 | Image recognition apparatus, operation determining method and computer-readable medium |
EP10808086.2A EP2466423B1 (en) | 2009-08-12 | 2010-08-12 | Image recognition apparatus, operation determining method, and program |
CN201080035693.8A CN102473041B (zh) | 2009-08-12 | 2010-08-12 | 图像识别装置、操作判断方法以及程序 |
CA2768893A CA2768893C (en) | 2009-08-12 | 2010-08-12 | Image recognition apparatus, operation determining method and program |
KR1020127001932A KR101347232B1 (ko) | 2009-08-12 | 2010-08-12 | 화상인식장치 및 조작판정방법, 그리고 컴퓨터 판독가능한 매체 |
US14/522,087 US9535512B2 (en) | 2009-08-12 | 2014-10-23 | Image recognition apparatus, operation determining method and computer-readable medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009187449A JP4701424B2 (ja) | 2009-08-12 | 2009-08-12 | 画像認識装置および操作判定方法並びにプログラム |
JP2009-187449 | 2009-08-12 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/384,682 A-371-Of-International US8890809B2 (en) | 2009-08-12 | 2010-08-12 | Image recognition apparatus, operation determining method and computer-readable medium |
US14/522,087 Continuation US9535512B2 (en) | 2009-08-12 | 2014-10-23 | Image recognition apparatus, operation determining method and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011018901A1 true WO2011018901A1 (ja) | 2011-02-17 |
Family
ID=43586084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/005058 WO2011018901A1 (ja) | 2009-08-12 | 2010-08-12 | 画像認識装置および操作判定方法並びにプログラム |
Country Status (7)
Country | Link |
---|---|
US (2) | US8890809B2 (ja) |
EP (1) | EP2466423B1 (ja) |
JP (1) | JP4701424B2 (ja) |
KR (1) | KR101347232B1 (ja) |
CN (2) | CN104615242A (ja) |
CA (2) | CA2768893C (ja) |
WO (1) | WO2011018901A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013046768A1 (ja) * | 2011-09-30 | 2013-04-04 | 楽天株式会社 | 検索装置、検索方法、記録媒体、ならびに、プログラム |
EP2610714A1 (en) * | 2012-01-02 | 2013-07-03 | Alcatel Lucent International | Depth camera enabled pointing behavior |
CN104012073A (zh) * | 2011-12-16 | 2014-08-27 | 奥林巴斯映像株式会社 | 拍摄装置及其拍摄方法、存储能够由计算机来处理的追踪程序的存储介质 |
US11036351B2 (en) | 2017-08-04 | 2021-06-15 | Sony Corporation | Information processing device and information processing method |
US20220334648A1 (en) * | 2021-04-15 | 2022-10-20 | Canon Kabushiki Kaisha | Wearable information terminal, control method thereof, and storage medium |
Families Citing this family (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4701424B2 (ja) * | 2009-08-12 | 2011-06-15 | 島根県 | 画像認識装置および操作判定方法並びにプログラム |
US8639020B1 (en) | 2010-06-16 | 2014-01-28 | Intel Corporation | Method and system for modeling subjects from a depth map |
GB2488785A (en) * | 2011-03-07 | 2012-09-12 | Sharp Kk | A method of user interaction with a device in which a cursor position is calculated using information from tracking part of the user (face) and an object |
GB2488784A (en) * | 2011-03-07 | 2012-09-12 | Sharp Kk | A method for user interaction of the device in which a template is generated from an object |
JP5864043B2 (ja) * | 2011-04-12 | 2016-02-17 | シャープ株式会社 | 表示装置、操作入力方法、操作入力プログラム、及び記録媒体 |
JP2012252386A (ja) * | 2011-05-31 | 2012-12-20 | Ntt Docomo Inc | 表示装置 |
US11048333B2 (en) | 2011-06-23 | 2021-06-29 | Intel Corporation | System and method for close-range movement tracking |
JP6074170B2 (ja) | 2011-06-23 | 2017-02-01 | インテル・コーポレーション | 近距離動作のトラッキングのシステムおよび方法 |
JP5921835B2 (ja) * | 2011-08-23 | 2016-05-24 | 日立マクセル株式会社 | 入力装置 |
EP2795430A4 (en) * | 2011-12-23 | 2015-08-19 | Intel Ip Corp | TRANSITION MECHANISM FOR A COMPUTER SYSTEM WITH USER DETECTION |
WO2013095679A1 (en) * | 2011-12-23 | 2013-06-27 | Intel Corporation | Computing system utilizing coordinated two-hand command gestures |
WO2013095677A1 (en) | 2011-12-23 | 2013-06-27 | Intel Corporation | Computing system utilizing three-dimensional manipulation command gestures |
US10345911B2 (en) | 2011-12-23 | 2019-07-09 | Intel Corporation | Mechanism to provide visual feedback regarding computing system command gestures |
JP2013134549A (ja) * | 2011-12-26 | 2013-07-08 | Sharp Corp | データ入力装置およびデータ入力方法 |
JP2013132371A (ja) * | 2011-12-26 | 2013-07-08 | Denso Corp | 動作検出装置 |
US9222767B2 (en) | 2012-01-03 | 2015-12-29 | Samsung Electronics Co., Ltd. | Display apparatus and method for estimating depth |
JP5586641B2 (ja) * | 2012-02-24 | 2014-09-10 | 東芝テック株式会社 | 商品読取装置及び商品読取プログラム |
US20130239041A1 (en) * | 2012-03-06 | 2013-09-12 | Sony Corporation | Gesture control techniques for use with displayed virtual keyboards |
US9477303B2 (en) | 2012-04-09 | 2016-10-25 | Intel Corporation | System and method for combining three-dimensional tracking with a three-dimensional display for a user interface |
KR101424562B1 (ko) * | 2012-06-11 | 2014-07-31 | 한국과학기술원 | 공간 인식 장치, 이의 동작 방법 및 이를 포함하는 시스템 |
JP2014002502A (ja) * | 2012-06-18 | 2014-01-09 | Dainippon Printing Co Ltd | 手のばし検出装置、手のばし検出方法及びプログラム |
JP5654526B2 (ja) * | 2012-06-19 | 2015-01-14 | 株式会社東芝 | 情報処理装置、キャリブレーション方法及びプログラム |
JP2014029656A (ja) * | 2012-06-27 | 2014-02-13 | Soka Univ | 画像処理装置および画像処理方法 |
JP5921981B2 (ja) * | 2012-07-25 | 2016-05-24 | 日立マクセル株式会社 | 映像表示装置および映像表示方法 |
US20140123077A1 (en) * | 2012-10-29 | 2014-05-01 | Intel Corporation | System and method for user interaction and control of electronic devices |
US10063757B2 (en) * | 2012-11-21 | 2018-08-28 | Infineon Technologies Ag | Dynamic conservation of imaging power |
CN104813258B (zh) * | 2012-11-22 | 2017-11-10 | 夏普株式会社 | 数据输入装置 |
JP5950806B2 (ja) * | 2012-12-06 | 2016-07-13 | 三菱電機株式会社 | 入力装置、情報処理方法、及び情報処理プログラム |
US20140340498A1 (en) * | 2012-12-20 | 2014-11-20 | Google Inc. | Using distance between objects in touchless gestural interfaces |
JP6167529B2 (ja) * | 2013-01-16 | 2017-07-26 | 株式会社リコー | 画像投影装置、画像投影システム、制御方法およびプログラム |
JP6029478B2 (ja) * | 2013-01-30 | 2016-11-24 | 三菱電機株式会社 | 入力装置、情報処理方法、及び情報処理プログラム |
JP5950845B2 (ja) * | 2013-02-07 | 2016-07-13 | 三菱電機株式会社 | 入力装置、情報処理方法、及び情報処理プログラム |
JP2018088259A (ja) * | 2013-03-05 | 2018-06-07 | 株式会社リコー | 画像投影装置、システム、画像投影方法およびプログラム |
US9519351B2 (en) * | 2013-03-08 | 2016-12-13 | Google Inc. | Providing a gesture-based interface |
JP6044426B2 (ja) * | 2013-04-02 | 2016-12-14 | 富士通株式会社 | 情報操作表示システム、表示プログラム及び表示方法 |
JP6207240B2 (ja) * | 2013-06-05 | 2017-10-04 | キヤノン株式会社 | 情報処理装置及びその制御方法 |
WO2015002420A1 (ko) * | 2013-07-02 | 2015-01-08 | (주) 리얼밸류 | 휴대용 단말기의 제어방법, 이를 실행하기 위한 프로그램을 저장한 기록매체, 애플리케이션 배포서버 및 휴대용 단말기 |
WO2015002421A1 (ko) * | 2013-07-02 | 2015-01-08 | (주) 리얼밸류 | 휴대용 단말기의 제어방법, 이를 실행하기 위한 프로그램을 저장한 기록매체, 애플리케이션 배포서버 및 휴대용 단말기 |
JP6248462B2 (ja) * | 2013-08-08 | 2017-12-20 | 富士ゼロックス株式会社 | 情報処理装置及びプログラム |
KR102166330B1 (ko) * | 2013-08-23 | 2020-10-15 | 삼성메디슨 주식회사 | 의료 진단 장치의 사용자 인터페이스 제공 방법 및 장치 |
JP6213193B2 (ja) * | 2013-11-29 | 2017-10-18 | 富士通株式会社 | 動作判定方法及び動作判定装置 |
JP6222830B2 (ja) * | 2013-12-27 | 2017-11-01 | マクセルホールディングス株式会社 | 画像投射装置 |
CN103713823B (zh) * | 2013-12-30 | 2017-12-26 | 深圳泰山体育科技股份有限公司 | 实时更新操作区位置的方法及系统 |
EP2916209B1 (en) * | 2014-03-03 | 2019-11-20 | Nokia Technologies Oy | Input axis between an apparatus and a separate apparatus |
US20150323999A1 (en) * | 2014-05-12 | 2015-11-12 | Shimane Prefectural Government | Information input device and information input method |
KR101601951B1 (ko) * | 2014-09-29 | 2016-03-09 | 주식회사 토비스 | 공간 터치 입력이 수행되는 곡면디스플레이 장치 |
WO2016103522A1 (ja) | 2014-12-26 | 2016-06-30 | 株式会社ニコン | 制御装置、電子機器、制御方法およびプログラム |
JP6460094B2 (ja) | 2014-12-26 | 2019-01-30 | 株式会社ニコン | 検出装置、空中像制御装置、検出方法および検出プログラム |
EP3239816A4 (en) | 2014-12-26 | 2018-07-25 | Nikon Corporation | Detection device, electronic instrument, detection method, and program |
US9984519B2 (en) | 2015-04-10 | 2018-05-29 | Google Llc | Method and system for optical user recognition |
CN104765459B (zh) * | 2015-04-23 | 2018-02-06 | 无锡天脉聚源传媒科技有限公司 | 虚拟操作的实现方法及装置 |
CN104866096B (zh) * | 2015-05-18 | 2018-01-05 | 中国科学院软件研究所 | 一种利用上臂伸展信息进行命令选择的方法 |
CN104978033A (zh) * | 2015-07-08 | 2015-10-14 | 北京百马科技有限公司 | 一种人机交互设备 |
KR101685523B1 (ko) * | 2015-10-14 | 2016-12-14 | 세종대학교산학협력단 | 사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 인터페이스 방법 및 그 시스템 |
KR101717375B1 (ko) * | 2015-10-21 | 2017-03-17 | 세종대학교산학협력단 | 가상 모니터 기반의 핸드 마우스를 이용한 게임 인터페이스 방법 및 그 시스템 |
US10216405B2 (en) * | 2015-10-24 | 2019-02-26 | Microsoft Technology Licensing, Llc | Presenting control interface based on multi-input command |
CN105404384A (zh) * | 2015-11-02 | 2016-03-16 | 深圳奥比中光科技有限公司 | 手势操作方法、利用手势定位屏幕光标的方法及手势系统 |
US10610133B2 (en) | 2015-11-05 | 2020-04-07 | Google Llc | Using active IR sensor to monitor sleep |
JP6569496B2 (ja) * | 2015-11-26 | 2019-09-04 | 富士通株式会社 | 入力装置、入力方法、及びプログラム |
US10289206B2 (en) * | 2015-12-18 | 2019-05-14 | Intel Corporation | Free-form drawing and health applications |
WO2018003862A1 (ja) * | 2016-06-28 | 2018-01-04 | 株式会社ニコン | 制御装置、表示装置、プログラムおよび検出方法 |
JP6230666B2 (ja) * | 2016-06-30 | 2017-11-15 | シャープ株式会社 | データ入力装置、データ入力方法、及びデータ入力プログラム |
US20180024623A1 (en) * | 2016-07-22 | 2018-01-25 | Google Inc. | Detecting user range of motion for virtual reality user interfaces |
WO2018083737A1 (ja) * | 2016-11-01 | 2018-05-11 | マクセル株式会社 | 表示装置及び遠隔操作制御装置 |
JP6246310B1 (ja) | 2016-12-22 | 2017-12-13 | 株式会社コロプラ | 仮想空間を提供するための方法、プログラム、および、装置 |
CN108345377A (zh) * | 2017-01-25 | 2018-07-31 | 武汉仁光科技有限公司 | 一种基于Kinect的自适应用户身高的交互方法 |
KR101821522B1 (ko) * | 2017-02-08 | 2018-03-08 | 윤일식 | 모니터를 이용한 엘리베이터 동작제어장치 및 방법 |
KR102610690B1 (ko) * | 2017-02-24 | 2023-12-07 | 한국전자통신연구원 | 외곽선과 궤적 표현을 활용한 사용자 입력 표현 장치 및 그 방법 |
CN107015644B (zh) * | 2017-03-22 | 2019-12-31 | 腾讯科技(深圳)有限公司 | 虚拟场景中游标的位置调节方法及装置 |
KR101968547B1 (ko) * | 2017-07-17 | 2019-04-12 | 주식회사 브이터치 | 객체 제어를 지원하기 위한 방법, 시스템 및 비일시성의 컴퓨터 판독 가능 기록 매체 |
CN107861403B (zh) * | 2017-09-19 | 2020-11-13 | 珠海格力电器股份有限公司 | 一种电器的按键锁定控制方法、装置、存储介质及电器 |
JP7017675B2 (ja) * | 2018-02-15 | 2022-02-09 | 有限会社ワタナベエレクトロニクス | 非接触入力システム、方法およびプログラム |
US20200012350A1 (en) * | 2018-07-08 | 2020-01-09 | Youspace, Inc. | Systems and methods for refined gesture recognition |
JP7058198B2 (ja) * | 2018-08-21 | 2022-04-21 | グリー株式会社 | 画像表示システム、画像表示方法及び画像表示プログラム |
WO2020170105A1 (en) | 2019-02-18 | 2020-08-27 | Purple Tambourine Limited | Interacting with a smart device using a pointing controller |
WO2021184356A1 (en) | 2020-03-20 | 2021-09-23 | Huawei Technologies Co., Ltd. | Methods and systems for hand gesture-based control of a device |
EP4115264A4 (en) * | 2020-03-23 | 2023-04-12 | Huawei Technologies Co., Ltd. | METHODS AND SYSTEMS FOR CONTROLLING A DEVICE BASED ON HAND GESTURES |
CN111880657B (zh) * | 2020-07-30 | 2023-04-11 | 北京市商汤科技开发有限公司 | 一种虚拟对象的控制方法、装置、电子设备及存储介质 |
JP7041211B2 (ja) * | 2020-08-03 | 2022-03-23 | パラマウントベッド株式会社 | 画像表示制御装置、画像表示システム及びプログラム |
TWI813907B (zh) * | 2020-09-30 | 2023-09-01 | 優派國際股份有限公司 | 觸控顯示裝置及其操作方法 |
CN113569635B (zh) * | 2021-06-22 | 2024-07-16 | 深圳玩智商科技有限公司 | 一种手势识别方法及系统 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0612177A (ja) * | 1992-06-29 | 1994-01-21 | Canon Inc | 情報入力方法及びその装置 |
JP2004013314A (ja) * | 2002-06-04 | 2004-01-15 | Fuji Xerox Co Ltd | 位置測定用入力支援装置 |
JP2006209359A (ja) * | 2005-01-26 | 2006-08-10 | Takenaka Komuten Co Ltd | 指示動作認識装置、指示動作認識方法及び指示動作認識プログラム |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3749369B2 (ja) * | 1997-03-21 | 2006-02-22 | 株式会社竹中工務店 | ハンドポインティング装置 |
JP3795647B2 (ja) * | 1997-10-29 | 2006-07-12 | 株式会社竹中工務店 | ハンドポインティング装置 |
US6064354A (en) * | 1998-07-01 | 2000-05-16 | Deluca; Michael Joseph | Stereoscopic user interface method and apparatus |
JP2001236179A (ja) * | 2000-02-22 | 2001-08-31 | Seiko Epson Corp | 指示位置検出システムおよび方法、プレゼンテーションシステム並びに情報記憶媒体 |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
US6911995B2 (en) * | 2001-08-17 | 2005-06-28 | Mitsubishi Electric Research Labs, Inc. | Computer vision depth segmentation using virtual surface |
JP4974319B2 (ja) * | 2001-09-10 | 2012-07-11 | 株式会社バンダイナムコゲームス | 画像生成システム、プログラム及び情報記憶媒体 |
JP4286556B2 (ja) * | 2003-02-24 | 2009-07-01 | 株式会社東芝 | 画像表示装置 |
JP2004078977A (ja) | 2003-09-19 | 2004-03-11 | Matsushita Electric Ind Co Ltd | インターフェイス装置 |
HU0401034D0 (en) * | 2004-05-24 | 2004-08-30 | Ratai Daniel | System of three dimension induting computer technology, and method of executing spatial processes |
WO2006003586A2 (en) * | 2004-06-29 | 2006-01-12 | Koninklijke Philips Electronics, N.V. | Zooming in 3-d touch interaction |
CN101308442B (zh) * | 2004-10-12 | 2012-04-04 | 日本电信电话株式会社 | 三维指示方法和三维指示装置 |
US8614676B2 (en) * | 2007-04-24 | 2013-12-24 | Kuo-Ching Chiang | User motion detection mouse for electronic device |
US20060267927A1 (en) * | 2005-05-27 | 2006-11-30 | Crenshaw James E | User interface controller method and apparatus for a handheld electronic device |
ITUD20050152A1 (it) * | 2005-09-23 | 2007-03-24 | Neuricam Spa | Dispositivo elettro-ottico per il conteggio di persone,od altro,basato su visione stereoscopica,e relativo procedimento |
US8217895B2 (en) * | 2006-04-28 | 2012-07-10 | Mtekvision Co., Ltd. | Non-contact selection device |
CN200947919Y (zh) * | 2006-08-23 | 2007-09-19 | 陈朝龙 | 辅助鼠标操作的支撑结构 |
JP4481280B2 (ja) * | 2006-08-30 | 2010-06-16 | 富士フイルム株式会社 | 画像処理装置、及び画像処理方法 |
US8354997B2 (en) * | 2006-10-31 | 2013-01-15 | Navisense | Touchless user interface for a mobile device |
KR100851977B1 (ko) * | 2006-11-20 | 2008-08-12 | 삼성전자주식회사 | 가상 평면을 이용하여 전자 기기의 사용자 인터페이스를제어하는 방법 및 장치. |
KR100827243B1 (ko) * | 2006-12-18 | 2008-05-07 | 삼성전자주식회사 | 3차원 공간상에서 정보를 입력하는 정보 입력 장치 및 그방법 |
CN101064076A (zh) * | 2007-04-25 | 2007-10-31 | 上海大学 | 远景定向查询展示装置及方法 |
US8963828B2 (en) * | 2007-06-04 | 2015-02-24 | Shimane Prefectural Government | Information inputting device, information outputting device and method |
US8166421B2 (en) * | 2008-01-14 | 2012-04-24 | Primesense Ltd. | Three-dimensional user interface |
JP4318056B1 (ja) * | 2008-06-03 | 2009-08-19 | 島根県 | 画像認識装置および操作判定方法 |
US8289316B1 (en) * | 2009-04-01 | 2012-10-16 | Perceptive Pixel Inc. | Controlling distribution of error in 2D and 3D manipulation |
US20100315413A1 (en) * | 2009-06-16 | 2010-12-16 | Microsoft Corporation | Surface Computer User Interaction |
JP4701424B2 (ja) * | 2009-08-12 | 2011-06-15 | 島根県 | 画像認識装置および操作判定方法並びにプログラム |
US8261211B2 (en) * | 2009-10-01 | 2012-09-04 | Microsoft Corporation | Monitoring pointer trajectory and modifying display interface |
US20120056989A1 (en) * | 2010-09-06 | 2012-03-08 | Shimane Prefectural Government | Image recognition apparatus, operation determining method and program |
-
2009
- 2009-08-12 JP JP2009187449A patent/JP4701424B2/ja not_active Expired - Fee Related
-
2010
- 2010-08-12 KR KR1020127001932A patent/KR101347232B1/ko active IP Right Grant
- 2010-08-12 EP EP10808086.2A patent/EP2466423B1/en not_active Not-in-force
- 2010-08-12 CA CA2768893A patent/CA2768893C/en not_active Expired - Fee Related
- 2010-08-12 US US13/384,682 patent/US8890809B2/en not_active Expired - Fee Related
- 2010-08-12 CA CA 2886208 patent/CA2886208A1/en not_active Abandoned
- 2010-08-12 CN CN201510015361.8A patent/CN104615242A/zh active Pending
- 2010-08-12 WO PCT/JP2010/005058 patent/WO2011018901A1/ja active Application Filing
- 2010-08-12 CN CN201080035693.8A patent/CN102473041B/zh not_active Expired - Fee Related
-
2014
- 2014-10-23 US US14/522,087 patent/US9535512B2/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0612177A (ja) * | 1992-06-29 | 1994-01-21 | Canon Inc | 情報入力方法及びその装置 |
JP2004013314A (ja) * | 2002-06-04 | 2004-01-15 | Fuji Xerox Co Ltd | 位置測定用入力支援装置 |
JP2006209359A (ja) * | 2005-01-26 | 2006-08-10 | Takenaka Komuten Co Ltd | 指示動作認識装置、指示動作認識方法及び指示動作認識プログラム |
Non-Patent Citations (1)
Title |
---|
YASUAKI NAKATSUGU: "A method for specifying cursor Position by finger pointing", ITE TECHNICAL REPORT, vol. 26, no. 8, 29 January 2002 (2002-01-29), pages 55 - 60, XP008168559 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013046768A1 (ja) * | 2011-09-30 | 2013-04-04 | 楽天株式会社 | 検索装置、検索方法、記録媒体、ならびに、プログラム |
CN104012073A (zh) * | 2011-12-16 | 2014-08-27 | 奥林巴斯映像株式会社 | 拍摄装置及其拍摄方法、存储能够由计算机来处理的追踪程序的存储介质 |
CN104012073B (zh) * | 2011-12-16 | 2017-06-09 | 奥林巴斯株式会社 | 拍摄装置及其拍摄方法、存储能够由计算机来处理的追踪程序的存储介质 |
CN107197141A (zh) * | 2011-12-16 | 2017-09-22 | 奥林巴斯株式会社 | 拍摄装置及其拍摄方法、存储能够由计算机来处理的追踪程序的存储介质 |
CN107197141B (zh) * | 2011-12-16 | 2020-11-03 | 奥林巴斯株式会社 | 拍摄装置及其拍摄方法、存储能够由计算机来处理的追踪程序的存储介质 |
EP2610714A1 (en) * | 2012-01-02 | 2013-07-03 | Alcatel Lucent International | Depth camera enabled pointing behavior |
US11036351B2 (en) | 2017-08-04 | 2021-06-15 | Sony Corporation | Information processing device and information processing method |
US20220334648A1 (en) * | 2021-04-15 | 2022-10-20 | Canon Kabushiki Kaisha | Wearable information terminal, control method thereof, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20150130715A1 (en) | 2015-05-14 |
JP2011039844A (ja) | 2011-02-24 |
CA2886208A1 (en) | 2011-02-17 |
JP4701424B2 (ja) | 2011-06-15 |
CA2768893A1 (en) | 2011-02-17 |
KR20120040211A (ko) | 2012-04-26 |
EP2466423A4 (en) | 2015-03-11 |
CN102473041B (zh) | 2015-01-07 |
US20120119988A1 (en) | 2012-05-17 |
KR101347232B1 (ko) | 2014-01-03 |
CA2768893C (en) | 2015-11-17 |
US9535512B2 (en) | 2017-01-03 |
EP2466423B1 (en) | 2018-07-04 |
CN102473041A (zh) | 2012-05-23 |
US8890809B2 (en) | 2014-11-18 |
CN104615242A (zh) | 2015-05-13 |
EP2466423A1 (en) | 2012-06-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4701424B2 (ja) | 画像認識装置および操作判定方法並びにプログラム | |
JP5604739B2 (ja) | 画像認識装置および操作判定方法並びにプログラム | |
JP5167523B2 (ja) | 操作入力装置および操作判定方法並びにプログラム | |
JP4900741B2 (ja) | 画像認識装置および操作判定方法並びにプログラム | |
KR101522991B1 (ko) | 조작입력장치 및 방법, 그리고 프로그램 | |
JP5515067B2 (ja) | 操作入力装置および操作判定方法並びにプログラム | |
JP5114795B2 (ja) | 画像認識装置および操作判定方法並びにプログラム | |
JP4678428B2 (ja) | 仮想空間内位置指示装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080035693.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10808086 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13384682 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010808086 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2768893 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 20127001932 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |