EP3398028A1 - System and method for human computer interaction - Google Patents
System and method for human computer interaction
Info
- Publication number
- EP3398028A1 (application EP16711569.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- smart devices
- cameras
- entity
- network
- picture frames
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
- 238000000034 method Methods 0.000 title claims abstract description 57
- 230000003993 interaction Effects 0.000 title description 9
- 238000012545 processing Methods 0.000 claims abstract description 17
- 230000008859 change Effects 0.000 claims abstract description 15
- 238000004891 communication Methods 0.000 claims abstract description 5
- 230000033001 locomotion Effects 0.000 claims description 36
- 238000004422 calculation algorithm Methods 0.000 claims description 11
- 238000001514 detection method Methods 0.000 claims description 10
- 230000008569 process Effects 0.000 claims description 6
- 230000005540 biological transmission Effects 0.000 claims description 4
- 238000012549 training Methods 0.000 claims description 4
- 238000010079 rubber tapping Methods 0.000 description 8
- 230000008901 benefit Effects 0.000 description 4
- 238000013461 design Methods 0.000 description 3
- 230000007246 mechanism Effects 0.000 description 3
- 230000000875 corresponding effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000005484 gravity Effects 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 230000002829 reductive effect Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 241000282412 Homo Species 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000002411 adverse Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000005265 energy consumption Methods 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 230000000670 limiting effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000005309 stochastic process Methods 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
- G06F18/295—Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Definitions
- the present invention relates to a system and method for Human Computer Interaction. More specifically, the invention relates to a system and method for detecting a change in the position of an entity relative to its surroundings.
- HCI Human Computer Interaction
- Kinect™ runs proprietary software.
- Kinect™ captures under any ambient light. This capability is ensured by the depth sensor's design, which consists of an infrared laser projector and a monochrome CMOS sensor. Also, according to the user's position, Kinect™ can not only adjust the range of the depth sensor but also calibrate the camera.
- the Leap Motion™ Controller is able to track movements down to a hundredth of a millimeter and to track all 10 fingers simultaneously when pointing, i.e., gestures that involve coarser movements are avoided.
- the Leap Motion™ Controller is able to track objects within a hemispherical area.
- the Leap Motion™ Controller consists of two monochromatic infrared cameras and three infrared LEDs, and its design enables it to face upwards when plugged into a USB port. The LEDs generate pattern-less IR light and the cameras generate almost 300 frames per second of reflected data, which is then sent through a USB cable to the host computer.
- the host computer runs the Leap Motion™ Controller software, which analyzes the data by synthesizing the 3D position data; by comparing the 2D frames generated by the two cameras, the position of the object can be extracted.
- the following movement patterns are recognized by the Leap Motion™ Controller software: a circle with a single finger, a swipe with a single finger (as if tracing a line), a key tap by a finger (as if tapping a keyboard key), and a screen tap, i.e., a tapping movement by the finger as if tapping a vertical computer screen.
- the invention relates to a method for detecting a change in the position of an entity relative to its surroundings, wherein the method uses at least two smart devices, each smart device having one camera.
- the cameras of the smart devices are set up to obtain a respective picture frame of an entity present in the field of view of the cameras and the smart devices being connected to each other via a wireless network and/or a direct communication link.
- the method comprises the steps of capturing picture frames of the entity with each camera of the at least two smart devices, and processing the captured picture frames to obtain information about the change in the position of the entity.
- the method further comprises a step of setting up the cameras of the smart devices such that the respective picture frames are in a predetermined spatial relationship to each other.
- This aspect preferably comprises placing the smart devices relative to a surface, preferably in a close position to each other, more preferably in a parallel position to each other. More preferably, the method comprises placing the smart devices facing downwards with the respective camera facing upwards on a stable surface, in a parallel position to each other.
- the method further comprises the step of setting up the cameras of the smart devices to capture the picture frames, wherein preferably the cameras have at least one of the following characteristics: a same resolution, a same rate and a same shutter speed.
- the computational effort is further reduced if the picture frames are captured as binocular vision picture frames, i.e., frames having similar characteristics.
- both smart devices have identical cameras and/or the cameras are set to a same resolution, a same rate and/or a same shutter speed.
- the method further comprises a step of calibrating each of the cameras of the smart devices to establish a predetermined spatial relationship between the three dimensional world and the two dimensional image plane; and/or training a motion detection algorithm and/or motion tracking algorithm.
- at least one of the two smart devices is connected to a network and the network comprises at least one further computing system.
- the computational effort is distributed within the network, and data is transmitted within the network to complete the computational effort.
- the image capturing and computationally inexpensive steps, preferably basic image processing steps, are performed in at least one of the at least two smart devices.
- the computationally expensive steps, preferably processing the captured picture frames to obtain information about the entity, are performed in the further computing system.
- Data transmission is performed between the at least two smart devices directly and/or via the network.
- the step of processing the captured image frames comprises at least one of the following further steps: a step of motion detection, a step of motion tracking, and/or a step of gesture recognition, preferably by using the Hidden Markov Model.
- the entity is at least one hand region, preferably at least one finger, and the step of motion tracking comprises a step of tracking the at least one hand region. The method further comprises a step of determining the 3D orientation of at least one hand, a step of determining the motion of at least one finger of at least one hand, and/or the step of gesture recognition comprises a step of recognizing a gesture based on the tracked motion of the at least one finger.
- the computational effort is implemented as a framework which is located at an abstraction layer close to the operating system kernel layer of the performing computing system.
- at least one of the following gestures is recognized: a swipe, a circle, a tap; and/or a hand orientation is recognized, preferably a roll, a pitch and a yaw.
- a system for detecting a change in the position of an entity relative to its surroundings comprises at least two smart devices, each smart device having one camera, the cameras of the smart devices being set up to obtain a respective picture frame of an entity present in the field of view of the cameras and the smart devices are configured to be connected to each other via a wireless network and/or a direct communication link; and the system is configured to process the captured picture frames to obtain information about the change in the position of the entity.
- the cameras of the smart devices are set up such that the respective picture frames are in a predetermined spatial relationship to each other; preferably the smart devices are configured to be placed relative to a surface, preferably in a close position to each other, more preferably in a parallel position to each other; and more preferably the smart devices are configured to be placed facing downwards with the respective camera facing upwards on a stable surface, in a parallel position to each other.
- the cameras of the smart devices are further set up to capture the picture frames, preferably binocular vision picture frames, wherein preferably the cameras have at least one of the following characteristics: a same resolution, a same rate and a same shutter speed.
- the cameras of the smart devices are configured to be calibrated to establish a predetermined spatial relationship between the three dimensional world and the two dimensional image plane; and/or to train a motion detection algorithm and/or to perform a motion tracking algorithm.
- At least one of the two smart devices is configured to be connected to a network and the network comprises at least one further computing system.
- the computational effort is distributed within the network, and data is transmitted within the network to complete the computational effort.
- at least one of the at least two smart devices is configured to perform image capturing and computationally inexpensive steps, preferably basic image processing steps.
- the further computing system is configured to perform computationally expensive steps, preferably processing the captured picture frames to obtain information about the entity.
- the at least two smart devices and the further computing system are configured to perform data transmission between the at least two smart devices directly and/or between the at least two smart devices and the further computing system.
- the performing computing system is configured to carry out the computational effort implemented as a framework which is located at an abstraction layer close to the operating system kernel layer.
- the smart devices and/or the further computing system are configured to perform the method according to any one of the preceding aspects of the invention.
- the invention lets its users experience the ability to control a computer or interact with augmented reality applications in a three-dimensional space by making touch-free gestures.
- the aforementioned method may be implemented as software provided by a server which can be either a computer or the cloud.
- a hand gesture recognition system is a key element used for HCI, as using hand gestures provides an attractive alternative to the cumbersome interface devices for HCI.
- portable devices such as cellular phones have become ubiquitous.
- the present invention provides an inexpensive solution of a gesture control mechanism that is also portable and wireless.
- the term smart device refers to an electronic device that has preferably one or more of the following properties: enabled to connect to at least one network with at least one network connecting means, wireless, mobile, and has at least one camera.
- the network connecting means can be one of the following: an NFC connecting means, a wireless LAN connecting means, a LAN connecting means, a mobile data connecting means, or a Bluetooth connecting means.
- the smart devices can be one of the following: a mobile phone, a smart watch, a smart camera, or a tablet device. None of the above lists for the network connecting means, the sensors and the smart devices is exhaustive or limits the scope of the invention. Furthermore, it is noted that the terms mobile device and smart device are used interchangeably herein.
- the term entity encompasses both a rigid entity, i.e., an object that has a more or less fixed shape and may or may not change its orientation and/or position in 3D space, and a flexible entity, i.e., an object that may or may not change its shape and may or may not change its orientation and/or position in 3D space.
- the present invention has the advantage that depth sensing can be performed without using lasers or infrared sensors. Instead, the invention is based on the idea of binocular vision, which allows a pair of eyes to be used together in harmony.
- For detecting a change in the position of an entity relative to its surroundings, the two picture frames preferably contain roughly the same visual information. This is preferably achieved by positioning the two cameras close to each other and by pointing them in a similar direction. When the three-dimensional relative positions of the two cameras and their respective lines of sight are known, 3D position data for entities within the acquired picture frames can be calculated. In other words, if the two cameras are set up in a predetermined spatial relationship, the picture frames can be used to calculate 3D position data for entities in the acquired picture frames. A parallel or quasi-parallel positioning is preferred since the computational effort is reduced compared to an arbitrary orientation of the cameras. A stable surface may help to arrange the smart devices. In addition or alternatively, a reference surface may help to arrange the smart devices.
- a software using data from at least one sensor of the smart device may help to arrange the smart devices.
- when the cameras are arranged side by side, according to the stereo vision principle the following relation is given: for every inch the cameras are apart, the subject whose 3D position is to be calculated has to be 30 inches away. Following this relation, the viewer is able to fuse the images together. This is typically referred to as the "1/30 Rule".
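As a worked illustration of the "1/30 Rule" described above, the following minimal Python sketch (a hypothetical helper, not part of the claimed method) computes the minimum subject distance for a given camera separation:

```python
def min_subject_distance(camera_separation_inches: float) -> float:
    """'1/30 Rule': for every inch the two cameras are apart, the subject
    should be at least 30 inches away for the two views to fuse."""
    return 30.0 * camera_separation_inches

# Two phones laid side by side, lenses roughly 3 inches apart:
print(min_subject_distance(3.0))  # -> 90.0 inches (about 2.3 m)
```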
- the framework is preferably located on a layer directly above the operating system's kernel and below the application framework layer.
- the framework for the respective computational effort will preferably be automated to ensure protection from any kind of attempt that threatens the user's privacy. More specifically, the system may be protected in order to avoid malicious actions intending to invade the user's privacy and manipulate personal data. However, the exact location of this mechanism depends on the architecture of the respective operating system.
- the method is implemented as a program which is located at a layer near the operating system layer or the kernel, which is not accessible by malicious applications programmed by application developers.
- the mentioned layer should be interpreted as an abstraction layer.
- abstraction layers typically separate different functional units of the operating system. Although the fine-grained details of these abstraction layers may differ between operating systems, on a higher level these operating systems are typically divided into the kernel/hardware layer, the library layer, the service layer and the application layer.
- the available amount of energy is limited, in particular if the smart device runs on a battery.
- the method preferably works in an idle state if no hand is detected in the tracking space and no state signal is generated, such that the tracking step is not performed.
- the term "mechanism" or "framework” can relate to a set of methods and a system.
- a tracking module, a human detection module, a gesture module, and a control module are preferably located on a layer directly above the operating system's kernel and below the application framework layer.
- Figure 1 schematically shows the real-time gesture recognition system comprising two smartphones
- Figure 2 schematically shows the triangulation method that is used for depth sensing
- Figure 3 schematically shows a flowchart of the method for hand position and orientation acquisition
- Figure 4 schematically shows a flowchart of the gesture recognition method.
Detailed Description of the Invention
- an in-device camera framework for a smart phone is proposed, while in another aspect of the invention another framework is designed to detect the hands of the user's body, extracted from the images taken by the on-device camera. Moreover, based on the results of detection, the use of a robust tracking algorithm is preferable, as any error in tracking will prevent flawless interaction with the computer. After these steps, the framework provides the results of detection and tracking as an input to the desired software application.
- the embodiments are explained assuming that the system resources are available on local desktop computers with fast processors and adequate memory.
- the present invention may at least partially be implemented in a cloud and/or may be also capable of working with the cloud.
- the session can be any type of software that takes input from a user.
- the camera which is embedded on the mobile device, can provide data to the computer for analysis and processing.
- Figure 1 schematically shows a real-time gesture recognition system comprising a space where a pair of identical mobile devices 10a, 10b is used in accordance with one embodiment of the present invention.
- the space preferably comprises a flat surface 20 where the pair of mobile devices is set side by side, facing downwards.
- the system comprises a computer 30 that serves as further computing system and may also be the unit that is being controlled by the mobile devices 10a, 10b. Further shown are the left hand 40a and the right hand 40b of a user.
- two cameras 11a, 11b of the same model are used. However, any two cameras may alternatively be used.
- the two cameras can record at the same resolution, same rate and shutter speed.
- the mobile devices according to one embodiment of the invention form an array of smart device cameras for hand gesture recognition.
- any number of devices can alternatively be used to carry out the invention.
- other embodiments comprise 3, 4, 5, 6, 7, 8, or any other number of mobile devices.
- the number of cameras existing and/or used per device is also not limited to one.
- each mobile device may also comprise one or more cameras, which may or may not be used to carry out the invention.
- a device comprising a front and a rear camera may be used in a way that only the rear or the front camera is used to carry out the invention.
- Figure 2 shows the deployment of the cameras based on the theory of binocular vision.
- Triangulation is any kind of distance calculation based on given lengths and angles, using trigonometric relations. Triangulation with two parallel cameras, or stereo triangulation may be performed based on parameters like, for example, distance between the cameras, focal length of the cameras, spatial angles of the line of sight from the imaged entity to each camera and/or other suitable parameters known to the skilled person.
- two cameras 10a, 10b, preferably with the same focal length, are placed preferably parallel or quasi-parallel to each other.
- X is the X-axis of both cameras
- Z is the optical axis of both cameras
- k is the focal length
- d is the distance between the two cameras 10a, 10b
- O is the entity captured
- R is the projection of the real-world entity O in an image acquired by the right camera 10b
- L is the projection of the real-world point O in an image acquired by the left camera 10a.
- since both cameras 10a, 10b are separated by d, they view the same entity O at a different location in the two-dimensional captured images.
- the distance between the two projected points R and L is called disparity and is used to calculate depth information, which is the distance between real-world entity O and the stereo vision system.
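Using the quantities defined above (focal length k, baseline d, and the disparity between the projections L and R of entity O), depth can be recovered with the standard stereo-triangulation relation Z = k · d / disparity. The sketch below assumes parallel cameras and that the projections are given as horizontal pixel coordinates, with k expressed in pixels; the numeric values are purely illustrative:

```python
def depth_from_disparity(k: float, d: float, x_left: float, x_right: float) -> float:
    """Stereo triangulation for two parallel cameras.

    k       -- focal length, in pixels
    d       -- baseline, i.e. the distance between the two cameras
    x_left  -- horizontal image coordinate of the entity in the left frame
    x_right -- horizontal image coordinate of the entity in the right frame

    Returns the distance Z between entity O and the stereo vision system,
    in the same units as d.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("expected a positive disparity for an entity in front of the rig")
    return k * d / disparity

# Illustrative values: 800 px focal length, cameras 7.5 cm apart, 40 px disparity
print(depth_from_disparity(k=800, d=0.075, x_left=650, x_right=610))  # -> 1.5 (metres)
```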
- Depth imaging acquired by triangulation may not be ideal when real-time and continuous imaging is required. In this case a different method is employed to generate the depth data.
- the cameras are calibrated before the beginning of the method of Human Computer Interaction. The purpose of calibration is to establish a relation between the three-dimensional world (corresponding to a real-world coordinate system) and the two-dimensional plane (corresponding to the image plane of the respective camera).
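Such a calibration is commonly performed with a planar checkerboard target. The sketch below uses OpenCV's standard routines to recover one camera's intrinsic parameters, which fix the relation between the three-dimensional world and the two-dimensional image plane; the checkerboard size and file names are assumptions made for illustration and are not prescribed by the description:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of an assumed 9x6 checkerboard target

# 3D corner coordinates in the checkerboard's own planar (z = 0) coordinate system
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_left_*.png"):    # hypothetical calibration shots from one camera
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]         # (width, height)

# The intrinsic matrix and distortion coefficients tie the 3D world to the 2D image plane
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("camera matrix:\n", camera_matrix)
```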
- Figure 3 shows the steps performed in one embodiment of the invention.
- a hand region may be tracked.
- An image region that correlates with a hand is extracted in each of the captured frames.
- the background is complex and dynamically changing, while illumination may be a stable factor, at least when the invention is carried out in an indoor environment.
- In order to track the hands of the user, the hands have to be separated from the background. Since the functions of tracking, recognition, and 3D processing require advanced algorithms, sufficient computational power is needed. Thus the implementation of computational offloading is a major advantage.
- at least part of the method is executed on a computing system configured accordingly, preferably a further computing system, e.g., a laptop or a desktop computer (cf. 30 in Fig. 1).
- the images taken by both cameras 11a, 11b can be captured in step S1 as YUV color images that are then converted in step S2 to HSV images.
- a filter, preferably a median filter, is applied in step S3 for minimizing the effects of image noise.
- the human skin is differentiated from its background where saturation values are high and hue values are close to those of the skin.
- a detection of the hand is performed.
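A minimal sketch of the capture, preprocessing and hand-detection steps S1–S5 described above, using OpenCV. For simplicity it thresholds skin directly on the BGR frames delivered by OpenCV (converted BGR→HSV rather than YUV→HSV as in the description), and the skin bounds and minimum contour area are illustrative assumptions, not values taken from the patent:

```python
import cv2
import numpy as np

# Assumed skin range in HSV; a real deployment would tune or learn these bounds.
SKIN_LOW = np.array([0, 60, 60], np.uint8)
SKIN_HIGH = np.array([25, 255, 255], np.uint8)

def detect_hand(frame_bgr):
    """Steps S2-S5: convert to HSV, median-filter, threshold skin, pick the largest blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)          # S2: convert to HSV
    hsv = cv2.medianBlur(hsv, 5)                               # S3: suppress image noise
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)               # S4: skin vs. background
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                            # idle: no hand in the tracking space
    hand = max(contours, key=cv2.contourArea)                  # S5: candidate hand region
    return hand if cv2.contourArea(hand) > 1000 else None

cap = cv2.VideoCapture(0)                                      # S1: capture frames
ok, frame = cap.read()
if ok:
    region = detect_hand(frame)
    print("hand detected" if region is not None else "no hand")
cap.release()
```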
- the 3D position of the hand is obtained in step S6.
- the 3D orientation of the hand is determined.
- the method of triangulation can further identify the roll of the hand and the pitch and yaw angles of the fingers. To estimate these angles, preferably three more parameters are determined, which are preferably the tip of a hand region, both the right and left points of the region, and the center of gravity of the hand region. Given the center of gravity and the tip point, the direction of the hand in 3D space reveals the roll and pitch of the hand. In a similar way, the left and right points determine whether a yaw takes place.
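The description does not fix the exact angle conventions, so the following sketch is only an assumed illustration of how the centre of gravity, the tip point and the left/right points of the hand region could be turned into a roll/pitch direction and a yaw indicator:

```python
import numpy as np

def hand_direction(tip, centroid):
    """Unit vector from the centre of gravity of the hand region to its tip
    (image or 3D coordinates); its inclination indicates the roll/pitch of the hand."""
    v = np.asarray(tip, dtype=float) - np.asarray(centroid, dtype=float)
    return v / np.linalg.norm(v)

def yaw_indicator(depth_left: float, depth_right: float) -> float:
    """Non-zero when the left and right extreme points of the hand region lie at
    different depths (from stereo triangulation), indicating a yaw."""
    return depth_left - depth_right

d = hand_direction(tip=(320, 80), centroid=(300, 200))
print("in-plane hand angle (deg):", round(float(np.degrees(np.arctan2(d[1], d[0]))), 1))
print("yaw indicator:", yaw_indicator(depth_left=0.52, depth_right=0.48))
```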
- Figure 4 shows the gesture recognition method according to one embodiment of the invention.
- the Hidden Markov Model is used to identify those gestures.
- the possible moving patterns are the following four: the circle, the swipe, the key tap and the tap, which can be visualized as a single finger tracing a circle, a long linear movement of a finger, a tapping movement by a finger as if tapping a keyboard key, and a tapping movement by the finger as if tapping a vertical computer screen.
- neural networks can be used for identification; however, the Hidden Markov Model is preferred.
- the Hidden Markov Model is a doubly stochastic process with an underlying process of transitions between hidden states of the system and a process of emitting observable outputs.
- the method described in this embodiment of the invention may have to undergo an initialization process, where the HMM is trained and tested.
- the Baum-Welch and Viterbi algorithms are used, as they are known for their computational savings.
- the initialization of the HMM parameters may be done by setting the initial probability of the first state to 1, while the transition probability distribution for each state is uniformly distributed.
- the preprocessed images are given as input to the HMM as sequences of quantized vectors.
- a recognizer based on HMMs interprets those sequences, which are directional codewords that characterize the trajectory of the motion. In order for a particular sequence to be correlated with a gesture, its likelihood should exceed that of the other models. As the method is used for the tracking of dynamic gestures, setting a static threshold value is impractical, since the recognition likelihoods of gestures varying in size and complexity vary significantly.
- C. Keskin, O. Aran and L. Akarun ("Real Time Gestural Interface for Generic Applications", Computer Engineering Dept. of Bogazici University) have constructed an adaptive threshold model by connecting the states of all models. If the likelihood of a model calculated for a sequence exceeds that of the threshold model for that sequence, then the result is the recognition of the gesture; otherwise the classification is rejected.
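A minimal sketch of this recognition step: motion vectors are quantized into directional codewords, each gesture has its own discrete HMM, and a sequence is classified by comparing scaled forward-algorithm log-likelihoods. For brevity the adaptive threshold model of Keskin et al. is replaced by a simple margin over the runner-up model, and all parameters are illustrative (untrained) assumptions:

```python
import numpy as np

def quantize_direction(dx, dy, n_codes=8):
    """Map a motion vector between consecutive hand positions to a directional codeword."""
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    return int(round(angle / (2 * np.pi / n_codes))) % n_codes

def log_forward(obs, start_p, trans_p, emit_p):
    """Scaled forward algorithm: log-likelihood of a codeword sequence under a discrete HMM."""
    alpha = start_p * emit_p[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

def classify(obs, models, margin=1.0):
    """Pick the gesture whose HMM scores highest; reject if not clearly above the runner-up."""
    scores = {name: log_forward(obs, *params) for name, params in models.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0] if scores[ranked[0]] - scores[ranked[1]] > margin else None

# Illustrative 2-state, 8-symbol left-to-right models (untrained assumptions):
start = np.array([1.0, 0.0])          # initial probability of the first state set to 1
trans = np.array([[0.5, 0.5],
                  [0.0, 1.0]])
emit_swipe = np.full((2, 8), 1.0); emit_swipe[:, 0] = 5.0   # biased towards the "rightwards" codeword
emit_swipe /= emit_swipe.sum(axis=1, keepdims=True)
emit_circle = np.full((2, 8), 1.0 / 8)                      # uniform emissions
models = {"swipe": (start, trans, emit_swipe), "circle": (start, trans, emit_circle)}

codewords = [quantize_direction(1, 0) for _ in range(10)]   # ten rightward steps: a swipe
print(classify(codewords, models))                          # -> "swipe"
```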
- the training step preferably takes place only once and before the initialization of the process, because it needs substantial time to be executed and also needs the vast resources of the computing system. For this, a data set that consists of images may be given as input.
- the mobile devices are interconnected in order to function.
- the mobile devices and further computing system are connected to the same network and/or directly to each other. Once the respective mobile device cameras begin capturing frames, the frames are sent via the network and/or direct connection to the further computing system where the image processing, tracking and gesture recognition take place.
- the software that resides on the computer and is responsible for the above functionality preferably stores the IP addresses of the mobile devices temporarily; these may be resolved by the software when and while the user is interacting with an application and/or software. Once the connection is established, the data transfer begins and the mobile devices send the frames to the further computing system for further processing. The output of the software is then given as input to the preferred application or software that the user is interacting with.
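As a rough illustration of this arrangement, the sketch below has a mobile device push JPEG-encoded frames to the further computing system over a TCP socket; the address, port and the simple length-prefixed wire format are hypothetical, since the description does not specify a transport protocol:

```python
import socket
import struct
import cv2

HOST, PORT = "192.168.1.50", 9000   # hypothetical address of the further computing system

def send_frames(device_id: int) -> None:
    """Runs on one mobile device: capture frames and stream them to the computer."""
    cap = cv2.VideoCapture(0)
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            payload = jpeg.tobytes()
            # length-prefixed message: 1-byte device id, 4-byte payload size, then the JPEG bytes
            sock.sendall(struct.pack("!BI", device_id, len(payload)) + payload)
    cap.release()
```

A matching receiver on the further computing system would read the header, decode the JPEG, and pair up frames from both devices before handing them to the processing pipeline described above.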
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Social Psychology (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- User Interface Of Digital Computer (AREA)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2016/055707 WO2017157433A1 (en) | 2016-03-16 | 2016-03-16 | System and method for human computer interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3398028A1 true EP3398028A1 (en) | 2018-11-07 |
Family
ID=55589818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16711569.0A (EP3398028A1, Ceased) (en) | System and method for human computer interaction | 2016-03-16 | 2016-03-16 |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3398028A1 (en) |
WO (1) | WO2017157433A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140192206A1 (en) * | 2013-01-07 | 2014-07-10 | Leap Motion, Inc. | Power consumption in motion-capture systems |
US20150097768A1 (en) * | 2013-10-03 | 2015-04-09 | Leap Motion, Inc. | Enhanced field of view to augment three-dimensional (3d) sensory space for free-space gesture interpretation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130070034A (en) * | 2011-12-19 | 2013-06-27 | 주식회사 엔씨소프트 | Apparatus and method of taking stereoscopic picture using smartphones |
US9552073B2 (en) * | 2013-12-05 | 2017-01-24 | Pixart Imaging Inc. | Electronic device |
2016
- 2016-03-16 WO PCT/EP2016/055707 patent/WO2017157433A1/en active Application Filing
- 2016-03-16 EP EP16711569.0A patent/EP3398028A1/en not_active Ceased
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140192206A1 (en) * | 2013-01-07 | 2014-07-10 | Leap Motion, Inc. | Power consumption in motion-capture systems |
US20150097768A1 (en) * | 2013-10-03 | 2015-04-09 | Leap Motion, Inc. | Enhanced field of view to augment three-dimensional (3d) sensory space for free-space gesture interpretation |
Non-Patent Citations (1)
Title |
---|
See also references of WO2017157433A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO2017157433A1 (en) | 2017-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11868543B1 (en) | Gesture keyboard method and apparatus | |
US10732725B2 (en) | Method and apparatus of interactive display based on gesture recognition | |
US20210350631A1 (en) | Wearable augmented reality devices with object detection and tracking | |
US10007349B2 (en) | Multiple sensor gesture recognition | |
JP6090140B2 (en) | Information processing apparatus, information processing method, and program | |
EP3218781B1 (en) | Spatial interaction in augmented reality | |
US9600078B2 (en) | Method and system enabling natural user interface gestures with an electronic system | |
US8660362B2 (en) | Combined depth filtering and super resolution | |
US9491441B2 (en) | Method to extend laser depth map range | |
US10254847B2 (en) | Device interaction with spatially aware gestures | |
US20130141327A1 (en) | Gesture input method and system | |
US9081418B1 (en) | Obtaining input from a virtual user interface | |
US10607069B2 (en) | Determining a pointing vector for gestures performed before a depth camera | |
US11886643B2 (en) | Information processing apparatus and information processing method | |
WO2023173668A1 (en) | Input recognition method in virtual scene, device and storage medium | |
CN105306819B (en) | A kind of method and device taken pictures based on gesture control | |
US20150185851A1 (en) | Device Interaction with Self-Referential Gestures | |
KR20110136017A (en) | Augmented reality device to display hologram object | |
US12093461B2 (en) | Measurement based on point selection | |
EP3398028A1 (en) | System and method for human computer interaction | |
Yang et al. | Real time 3D hand tracking for 3D modelling applications | |
Mustafa et al. | Multi Finger Gesture Recognition and Classification in Dynamic Environment under Varying Illumination upon Arbitrary Background |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
 | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
 | 17P | Request for examination filed | Effective date: 20180802 |
 | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
 | AX | Request for extension of the european patent | Extension state: BA ME |
 | DAV | Request for validation of the european patent (deleted) | |
 | DAX | Request for extension of the european patent (deleted) | |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
 | 17Q | First examination report despatched | Effective date: 20190925 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
 | REG | Reference to a national code | Ref country code: DE. Ref legal event code: R003 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
 | 18R | Application refused | Effective date: 20220319 |