US20190107894A1 - System and method for deep learning based hand gesture recognition in first person view - Google Patents
- Publication number: US20190107894A1 (U.S. application Ser. No. 16/020,245)
- Authority: US (United States)
- Legal status: Granted
Classifications
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06K9/00355 (legacy)
- G06K9/6256 (legacy)
- G06K9/6267 (legacy)
- G06T19/006—Mixed reality
- G06V40/107—Static hand or arm
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Description
- This U.S. patent application claims priority under 35 U.S.C. § 119 to India Application No. 201721035650, filed on Oct. 7, 2017. The entire contents of the aforementioned application are incorporated herein by reference.
- This disclosure relates generally to detection of hand gestures, and more particularly to a system and method for detecting interaction of three-dimensional dynamic hand gestures with frugal augmented reality (AR) devices such as head-mount devices.
- Wearable Augmented Reality (AR) devices have been exceedingly popular in recent years. The user interaction modalities used in such devices point to the fact that hand gestures form an intuitive means of interaction in AR/VR (augmented reality/virtual reality) applications.
- These devices use a variety of on-board sensors and customized processing chips, which often ties the technology to complex and expensive hardware. These devices are tailor-made to perform a specific function and are mostly not readily available due to their exorbitant prices.
- a method for hand-gesture recognition includes receiving, via one or more hardware processors, a plurality of frames of a media stream of a scene captured from a first person view (FPV) of a user using at least one RGB sensor communicably coupled to a wearable AR device.
- the media stream includes RGB image data associated with the plurality of frames of the scene.
- the scene comprises a dynamic hand gesture performed by the user.
- the method includes estimating, via the one or more hardware processors, a temporal information associated with the dynamic hand gesture from the RGB image data by using a deep learning model.
- the estimated temporal information is associated with hand poses of the user and comprises a plurality of key-points identified on the user's hand in the plurality of frames. Further, the method includes classifying, by using a multi-layered Long Short-Term Memory (LSTM) classification network, the dynamic hand gesture into at least one predefined gesture class based on the temporal information of the key-points, via the one or more hardware processors.
- a system for gesture recognition includes one or more memories; and one or more hardware processors, the one or more memories coupled to the one or more hardware processors, wherein the at least one processor is capable of executing programmed instructions stored in the one or more memories to receive a plurality of frames of a media stream of a scene captured from a first person view (FPV) of a user using at least one RGB sensor communicably coupled to a wearable AR device.
- the media stream includes RGB image data associated with the plurality of frames of the scene.
- the scene includes a dynamic hand gesture performed by the user.
- the one or more hardware processors are further configured by the instructions to estimate a temporal information associated with the dynamic hand gesture from the RGB image data by using a deep learning model.
- the estimated temporal information is associated with hand poses of the user and includes a plurality of key-points identified on user's hand in the plurality of frames.
- the one or more hardware processors are further configured by the instructions to classify, by using a multi-layered LSTM classification network, the dynamic hand gesture into at least one predefined gesture class based on the temporal information of the key points.
- a non-transitory computer-readable medium having embodied thereon a computer program for executing a method for gesture recognition includes receiving a plurality of frames of a media stream of a scene captured from a first person view (FPV) of a user using at least one RGB sensor communicably coupled to a wearable AR device.
- the media stream includes RGB image data associated with the plurality of frames of the scene.
- the scene comprises a dynamic hand gesture performed by the user.
- the method includes estimating a temporal information associated with the dynamic hand gesture from the RGB image data by using a deep learning model.
- the estimated temporal information is associated with hand poses of the user and comprises a plurality of key-points identified on user's hand in the plurality of frames.
- the method includes classifying, by using a multi-layered LSTM classification network, the dynamic hand gesture into at least one predefined gesture class based on the temporal information of the key points.
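- The overall flow described in the embodiments above can be summarized by the minimal sketch below. It assumes two hypothetical callables, `estimate_keypoints` (the deep hand-pose model) and `lstm_classifier` (the trained multi-layered LSTM); the gesture class names and the even-sampling step are illustrative, not mandated by the disclosure.

```python
import numpy as np

GESTURE_CLASSES = ["bloom", "click", "zoom_in", "zoom_out"]  # example gesture classes

def recognize_gesture(frames, estimate_keypoints, lstm_classifier, num_samples=100):
    """Classify a dynamic hand gesture from RGB frames captured in first person view.

    frames             : list of HxWx3 RGB arrays from the wearable device's camera
    estimate_keypoints : callable mapping one RGB frame -> (21, 3) array of 3D key-points
    lstm_classifier    : callable mapping a (num_samples, 63) sequence -> class probabilities
    """
    # Temporal information: 21 hand key-points (x, y, z) estimated per frame.
    keypoints = np.stack([estimate_keypoints(f) for f in frames])   # (T, 21, 3)
    sequence = keypoints.reshape(len(frames), -1)                   # (T, 63)

    # Spread a fixed number of samples evenly across the gesture's duration.
    idx = np.linspace(0, len(frames) - 1, num_samples).astype(int)
    probs = lstm_classifier(sequence[idx])                          # one score per class

    return GESTURE_CLASSES[int(np.argmax(probs))], float(np.max(probs))
```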
- FIGS. 1A-1D illustrate various examples of dynamic hand gestures according to some embodiments of the present disclosure.
- FIG. 2 illustrates an example system architecture for gesture recognition using deep learning according to some embodiments of the present disclosure.
- FIG. 3 illustrates a network implementation of system for gesture recognition using deep learning according to some embodiments of the present disclosure.
- FIG. 4 illustrates a representative process flow for gesture recognition using deep learning according to some embodiments of the present disclosure.
- FIG. 5 illustrates a process flow for estimating temporal information associated with the dynamic hand gesture according to some embodiments of the present disclosure.
- FIG. 6 illustrates an example multi-layer LSTM network for gesture classification according to some embodiments of the present disclosure.
- FIG. 7 illustrates a plurality of key-points detected by hand pose detection module as overlay on input images according to some embodiments of the present disclosure.
- FIG. 8 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
- Augmented reality refers to representation of a view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, text, graphics, or video. AR is useful in various applications such as medical, education, entertainment, military, and so on. Wearable AR/VR devices such as the Microsoft HoloLens™, Daqri Smart Helmet™, and Meta Glasses™ have been exceedingly popular in recent years.
- the user interaction modalities used in such devices point to the fact that hand gestures form an intuitive means of interaction in AR/VR applications.
- These devices use a variety of on-board sensors and customized processing chips which often ties the technology to complex and expensive hardware. These devices are tailor made to perform a specific function and are not readily available due to their exorbitant prices.
- Generic platforms such as the Microsoft Kinect™ and Leap Motion™ Controller provide the much-needed abstraction but fare poorly in direct sunlight, incandescent light, and outdoor environments due to the presence of infrared radiation, as well as in the presence of reflective surfaces such as thick glass and under water.
- Currently, due to advances in powerful processors and high-quality optics in smart mobile electronic devices, such devices have been gaining popularity as an appealing and readily available platform for AR/VR applications, for instance devices such as Google Cardboard™ and Wearality, which are video-see-through devices for providing an immersive VR experience. Using a stereo-rendered camera feed and overlaid images, audio, and text, these devices can also be extended for AR applications.
- The main motive of using these frugal headsets (Google Cardboard or Wearality with an Android smartphone) was their economic viability, portability, and easy scalability to the mass market.
- However, accessibility of sensors for these head-mounted devices (HMDs) is limited to the sensors available on the attached smartphone.
- Current versions use a magnetic trigger or a conductive lever to trigger a single event, hence curtailing the richness of possible user interaction. In addition, frequent usage of the magnetic trigger and conductive lever leads to wear and tear of the device. Moreover, head tracking in said devices is inconvenient and shifts the focus away from the object of interest in the user's Field of View (FoV), and such devices offer inaccurate speech recognition in industrial outdoor settings due to ambient noise. Based on the above-mentioned technical problems in the conventional devices, hand gestures are typically the preferred mode of interaction as they reduce human effort and are effective in interacting with the surrounding environment. However, current methods for hand gesture recognition in First Person View (FPV) are constrained to specific use-cases and lack robustness under realistic conditions because of skin color dependency.
- Various embodiments disclosed herein provide methods and systems that provide a technical solution to the above-mentioned technical problems in gesture detection, particularly dynamic hand gesture detection, using a deep learning approach.
- By using a deep learning approach, computer vision models can be built that are robust to intra-class variations and often surpass human abilities in performing detection and classification tasks.
- A system for detecting and classifying complex hand gestures such as Bloom, Click, Zoom-In, and Zoom-Out in FPV for AR applications involving single RGB camera input, without built-in depth sensors, is presented.
- the aforementioned hand gestures are presented in FIGS. 1A-1D for the ease of understanding.
- the disclosed method and system overcomes the limitations with existing techniques and opens avenues for rich user-interaction on frugal devices.
- Referring now to FIGS. 1A-1D, various dynamic hand gestures are illustrated.
- FIG. 1A illustrates a ‘Bloom’ dynamic hand gesture
- FIG. 1B illustrates various stages of a ‘click’ dynamic hand gesture
- FIG. 1C illustrates various stages of ‘Zoom-in’ dynamic hand gesture
- FIG. 1D illustrates various stages of ‘Zoom-Out’ dynamic hand gesture.
- the term ‘dynamic’ 3D hand gesture refers to a hand gesture which is not static but requires dynamic motion.
- The dynamic hand gestures considered herein, such as Bloom, Click, Zoom-in, and Zoom-out, are each shown to include multiple stages. For instance, the hand-gesture Bloom illustrated in FIG. 1A is performed by stage 110 followed by stage 112, which is further followed by stage 114.
- the bloom hand gesture can be performed for performing a predefined task, for example, a menu display operation.
- FIG. 1B illustrates multiple stages of hand movement to execute/perform click hand gesture, including stage 120 followed by stage 122 , which is further followed by stage 124 .
- the click hand gesture can be performed for performing a predefined task, such as a select/hold operation.
- FIG. 1C illustrates multiple stages of hand movement to execute the Zoom-in hand gesture, including stage 130 followed by stage 132, which is further followed by stage 134.
- The Zoom-in hand gesture can be performed for zooming into a display, for example that of a scene.
- The execution of the Zoom-out hand gesture is shown in FIG. 1D, wherein stage 140 of hand movement is followed by hand movement in stage 142, which is finally followed by stage 144.
- the zoom-out hand gesture can be performed, for instance for performing a predefined task such as zooming-out of a scene being displayed.
- the aforementioned hand gestures are presented for exemplary purposes and are not intended to limit the embodiments disclosed herein.
- Various distinct applications and devices can utilize distinct hand gestures to perform various functionalities by utilizing the computations described herewith in various embodiments.
- the dynamic hand gesture may correspond to one of a 2D hand gesture and a 3D hand gesture.
- the embodiments disclosed herein present a method and system for detecting complex dynamic hand gestures, such as those described and depicted in FIGS. 1A-1D, in First Person View (FPV) for AR applications involving a single RGB camera.
- the system uses RGB image data received from the single RGB camera as the input, without requiring any depth information, thereby precluding the need of additional sophisticated depth sensors and overcoming the limitations of existing techniques.
- a high level example system architecture for gesture detection in accordance with various embodiments of the present disclosure is presented here with reference to FIG. 2 .
- The manner in which the system and method for gesture recognition using head-mount devices shall be implemented has been explained in detail with respect to FIGS. 1 through 5. While aspects of the described methods and systems for gesture recognition using head-mount devices can be implemented in any number of different systems, utility environments, and/or configurations, the embodiments are described in the context of the following exemplary system(s).
- the system architecture is shown to include a device for capturing a media stream in FPV of a user.
- the disclosed device 202 may include (1) a single RGB camera, for example installed in a mobile communication device such as a smart phone, and (2) an AR wearable, for example a head-mounted AR device.
- Example of such an AR wearable may include Google cardboard.
- the media stream captured by the RGB camera in user's FPV (being facilitated by the AR wearable) is sent to a system 204 for gesture detection.
- the system may be embodied in a remote server.
- the media stream may be downscaled prior to sending the same to the remote server.
- the system 204 is adapted to classify the performed gesture in the media stream in order to recognize the gesture. Upon recognition of the gesture, the system 204 communicates the result back to the mobile communication device.
- the system is adapted to receive a media stream having a dynamic hand gesture being performed for executing a predefined task, wherein the media stream is captured in user's FPV.
- Various hand gestures and the corresponding predefined tasks have been described with reference to FIGS. 1A-1D
- the system 302 is capable of detecting the dynamic hand gesture.
- the detection of dynamic hand gesture includes detecting a presence of a stable hand in a hand pose, followed by motion of hand in particular manner so as to execute the predefined task.
- While the system 302 is implemented for gesture detection via head-mount devices, it may be understood that the system 302 is not restricted to any particular machine or environment.
- the system 302 can be utilized for a variety of domains where detection of gesture for execution of a task is to be determined.
- the system 302 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like.
- the system 302 may capture the media stream, for example, videos and/or images via multiple devices and/or machines 304 - 1 , 304 - 2 . . . 304 -N, collectively referred to as devices 304 hereinafter.
- Each of the devices includes at least one RGB sensor communicably coupled to a wearable AR device.
- the RGB sensors may be embodied in a media capturing device such as a handheld electronic device, a mobile phone, a smartphone, a portable computer, a PDA, and so on.
- the device may embody a VR camera in addition to the RGB sensor.
- the device embodying the RGB sensor may be communicably coupled to a wearable AR device to allow capturing of the media stream in a FPV of a user holding the media capturing device and wearing the wearable AR device.
- the AR devices are the devices that may embody AR technologies. AR technologies enhance user's perception and help the user to see, hear, and feel the environments in enriched ways.
- the devices 304 are communicatively coupled to the system 302 through a network 306 , and may be capable of transmitting the captured media stream to the system 302 .
- the network 306 may be a wireless network, a wired network or a combination thereof.
- the network 306 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like.
- the network 306 may either be a dedicated network or a shared network.
- the shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another.
- the network 306 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
- the system 302 may be embodied in a computing device 310 .
- the computing device 310 may include, but are not limited to, a desktop personal computer (PC), a notebook, a laptop, a portable computer, a smart phone, a tablet, and the like.
- the system 302 may also be associated with a data repository 312 to store the media stream. Additionally or alternatively, the data repository 312 may be configured to store data and/or information generated during gesture recognition in the media stream.
- the repository 312 may be configured outside and communicably coupled to the computing device 310 embodying the system 302 . Alternatively, the data repository 312 may be configured within the system 302 .
- An example implementation of the system 302 for gesture recognition in the media stream is described further with reference to FIG. 4 .
- FIG. 4 illustrates a flow diagram of a method 400 for hand-gesture recognition, according to some embodiments of the present disclosure.
- the method 400 may be described in the general context of computer executable instructions.
- computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types.
- the method 400 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network.
- the order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 400 , or an alternative method.
- the method 400 can be implemented in any suitable hardware, software, firmware, or combination thereof.
- the method 400 depicted in the flow chart may be executed by a system, for example, the system 302 of FIG. 3 .
- the system 302 may be embodied in an exemplary computer system, for example computer system 801 (FIG. 8).
- the method 400 of FIG. 4 will be explained in more detail below with reference to FIGS. 4-7 .
- the method 400 is initiated when, at 402, a user captures a media stream by means of an RGB sensor communicably coupled to a wearable AR device 404.
- a device 406 embodying the RGB sensor may include, but is not limited to a smartphone, a PDA, a portable computer and so on.
- the wearable AR device 404 may include hardware and software that may be collectively configured to host an AR application for performing AR related functions.
- the device 406 incorporating the RGB sensor along with the device running the AR application (or the wearable AR device 404 ) may hereinafter be collectively referred to as a device 408 .
- the device 408 captures a media stream of dynamic gestures, for example, gestures as described in FIGS. 1A-1D , performed by the user in the FPV.
- the gesture may include a dynamic hand gesture.
- the dynamic hand gesture may be one of a 2D and 3D hand gesture.
- the frames of the media stream captured in FPV are streamed for processing to the gesture recognition system (for example, the system 302 of FIG. 3 ), at 410 .
- the frames obtained from the device 408 are first down-scaled, for example to a resolution of 320×240, to achieve real-time performance by reducing the computational time without compromising on quality.
- the device 408 streams the frames to the gesture recognition system, for example at 25 FPS.
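- As an illustration of this client-side step, the sketch below uses OpenCV to capture frames, downscale them to 320×240, and pace the stream at roughly 25 FPS; the `send_frame` callable stands in for whatever transport (socket, HTTP, etc.) the device actually uses and is an assumption.

```python
import time
import cv2

def stream_fpv_frames(send_frame, camera_index=0, size=(320, 240), fps=25):
    """Capture FPV frames from the device camera, downscale them, and stream them."""
    cap = cv2.VideoCapture(camera_index)
    frame_interval = 1.0 / fps
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Downscaling reduces computational time on the recognition server.
            small = cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
            send_frame(small)            # placeholder transport to the gesture server
            time.sleep(frame_interval)   # approximate 25 FPS pacing
    finally:
        cap.release()
```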
- the gesture recognition system receives a plurality of frames of the media stream.
- the frames are the RGB frames acquired from the device 408 .
- the RGB frames include RGB image data associated with the plurality of frames of the scene.
- the RGB image data refers to data corresponding to the Red, Green and Blue colors associated with the frames.
- a temporal information associated with the dynamic hand gesture is estimated from the RGB image data by using a deep learning model.
- the gesture recognition system estimates the temporal information associated with the dynamic hand gesture.
- the estimated temporal information is associated with hand poses of the user and includes a plurality of key-points identified on user's hand in the plurality of frames.
- Various hand poses (or stages of dynamic hand gestures) of a user while performing the dynamic hand gestures are described with reference to FIGS. 1A-1D . A detailed explanation of estimation of the temporal information is described further with reference to FIG. 5 .
- a process flow for estimating temporal information associated with the dynamic hand gesture is illustrated.
- the estimation of the temporal information is performed by a hand pose estimation module 502 .
- the hand pose estimation module 502 facilitates estimating the temporal information based on a deep learning approach that estimates the 3D hand pose from a single RGB image, thereby overcoming the challenges caused due to unavailability of depth information in conventional systems.
- a deep learning network utilizes the RGB image data to estimate the temporal information.
- the temporal information includes a plurality of key-points on hand present in the user's field of view (FoV) in the frames.
- the plurality of key-points includes 21 hand key-points, comprising 4 key-points per finger and one key-point close to the wrist of the user's hand.
- the gesture recognition system detects the plurality of key-points and learns/estimates a plurality of network-implicit 3D articulation priors having the plurality of key-points of sample users' hands from sample RGB images using the deep learning network.
- the plurality of network-implicit 3D articulation priors includes a plurality of key-points determined from a plurality of training sample RGB images of user's hand.
- Based on the plurality of network-implicit 3D articulation priors, the hand pose estimation module 502 detects the plurality of key-points on the user's hand in the plurality of frames (or RGB images). A detailed process flow for detecting the key-points on the user's hand in the RGB images is illustrated in FIG. 5.
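- One plausible indexing of the 21 key-points (one wrist point plus four points per finger) is sketched below; the exact ordering used by the pose network is not specified in this description, so the index assignments are illustrative only.

```python
# Illustrative key-point layout: index 0 is the wrist, then 4 key-points per finger.
WRIST = 0
FINGERS = ["thumb", "index", "middle", "ring", "little"]

KEYPOINT_INDEX = {"wrist": [WRIST]}
for i, finger in enumerate(FINGERS):
    start = 1 + 4 * i
    KEYPOINT_INDEX[finger] = list(range(start, start + 4))

NUM_KEYPOINTS = 1 + 4 * len(FINGERS)   # 21 key-points in total
assert NUM_KEYPOINTS == 21
```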
- the RGB images such as images 130 , 132 , 134 are received at the gesture recognition system at 502 .
- the gesture recognition system may include the hand pose estimation module 502 for estimating temporal information associated with the dynamic hand gesture.
- the hand pose estimation module 502 estimates the temporal information with the help of deep learning networks including, but not limited to HandSegNet network, PoseNet network and PosePrior network, as described below:
- HandSegNet (marked as 508): The HandSegNet network is a segmentation network that localizes the hand within the image/frame.
- PoseNet (marked as 510 ): Given segmented hand mask as the input, the PoseNet localizes 21 hand key-points by estimating 2-dimensional scoremaps for each key-point, containing likelihood information about its spatial location.
- PosePrior (marked as 512 ): PosePrior network estimates the most likely 3D hand structure conditioned on the score maps obtained from PoseNet.
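- The three networks compose as in the sketch below; each stage is treated as an opaque callable because the concrete architectures are referenced, not defined, here, and the interfaces shown are assumptions.

```python
import numpy as np

def estimate_hand_pose(rgb_frame, handsegnet, posenet, poseprior):
    """Estimate 3D hand key-points from a single RGB frame.

    handsegnet : RGB frame -> binary hand mask (localizes the hand)
    posenet    : (frame, mask) -> 21 two-dimensional score maps of key-point likelihoods
    poseprior  : score maps -> most likely (21, 3) 3D hand structure
    """
    hand_mask = handsegnet(rgb_frame)            # 1) localize the hand in the frame
    score_maps = posenet(rgb_frame, hand_mask)   # 2) per-key-point 2D likelihood maps
    keypoints_3d = poseprior(score_maps)         # 3) lift the score maps to a 3D pose
    return np.asarray(keypoints_3d)              # (21, 3) coordinates
```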
- the aforementioned deep learning networks may be pre-trained in order to estimate the plurality of key-points.
- the plurality of key-points may include 21 key-points of user's hand.
- These networks may be trained using a large-scale 3D hand pose dataset having a plurality of training samples RGB images based on synthetic hand models.
- the dataset may include a huge data set of photo-realistic renderings of different subjects performing multiple unique actions.
- videos of all the users' hands present in the dataset may lie in an optimum range, for example 40 cm to 65 cm from the camera center, which is ideal for FPV use-cases.
- the light position and intensities may be randomized and the images may be saved using a lossy JPEG compression with losses of up to 40%.
- the background may be chosen at random from various images, and the camera location may be chosen randomly in a spherical vicinity around the hand for each frame, ensuring the robustness of the model to external factors.
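- A rough sketch of this kind of randomization is shown below (random lighting scale, random background compositing, and lossy JPEG re-encoding); the specific parameter ranges are assumptions rather than values taken from the dataset.

```python
import random
import cv2
import numpy as np

def augment_render(hand_rgb, hand_mask, backgrounds, max_jpeg_loss=40):
    """Composite a rendered hand onto a random background with randomized lighting
    and lossy JPEG compression, roughly mirroring the augmentation described above."""
    # Randomize light intensity (illustrative range).
    lit = np.clip(hand_rgb.astype(np.float32) * random.uniform(0.6, 1.4),
                  0, 255).astype(np.uint8)

    # Paste the hand onto a randomly chosen background image (hand_mask is HxW).
    h, w = hand_rgb.shape[:2]
    bg = cv2.resize(random.choice(backgrounds), (w, h))
    composited = np.where(hand_mask[..., None] > 0, lit, bg)

    # Re-encode with lossy JPEG compression (losses of up to ~40% -> quality >= 60).
    quality = random.randint(100 - max_jpeg_loss, 95)
    ok, buf = cv2.imencode(".jpg", composited, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```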
- the hand pose estimation module 502 detects the plurality of key-points on the user's hand in the plurality of frames based on the plurality of network-implicit 3D articulation priors.
- the 21 key-points detected by the network are shown as an overlay at 514 on the input video frames 516 (for example, video frames 518 , 520 , 522 ) in FIG. 5 .
- the hand pose estimation module outputs coordinate values for each of the 21 key-points (also referred to as temporal information) detected on the user's hand.
- the temporal information is input to a gesture classification network.
- the gesture classification network includes an LSTM network.
- the LSTM network classifies the dynamic hand gesture into at least one predefined gesture class based on the key-points, as is explained further with reference to FIGS. 4 and 6 .
- the dynamic gesture is classified into at least one predefined gesture class based on the temporal information of the key points by using a multi-layered LSTM classification network.
- the multi-layered LSTM network includes a first layer, a second layer and a third layer.
- the first layer includes an LSTM layer consisting of a plurality of LSTM cells to learn long-term dependencies and patterns in the sequence of 3D coordinates of the 21 key-points detected on the user's hand.
- the second layer includes a flattening layer that makes the temporal data one-dimensional.
- the third layer includes a fully connected layer with output scores that correspond to each of the 3D dynamic hand gestures.
- the output scores are indicative of posterior probability corresponding to the each of the dynamic hand gestures for classification in the at least one predefined gesture class. For example, in the present embodiment if the system is trained for classification of dynamic hand gestures into four classes (for instance, the dynamic hand gestures defined in FIGS. 1A to 1D ), then there would be four output scores determined by the third layer. In alternate embodiments, the number of output scores can vary depending on the number of the gestures classes.
- the ability and efficiency of LSTM neural networks in learning long-term dependencies of sequential data facilitates the LSTM network based architecture for the task of gesture classification using spatial location of hand key-points in video frames.
- the LSTM network 600 is shown to include three layers, namely, a first layer 602 including a LSTM layer, a second layer 604 including a flattening layer, and a third layer 606 including a fully connected layer.
- Each gesture input is sampled into 100 frames spread evenly across the duration for feeding into the LSTM network 600, making the input of size 63×100 (3 coordinate values for each of the 21 key-points) to the LSTM layer 602, as illustrated in FIG. 6.
- the LSTM layer 602 consisting of 200 LSTM cells tries to learn long-term dependencies and patterns in the sequence of coordinates during network training.
- the LSTM layer 602 is followed by the flattening layer 604 that makes the data one-dimensional.
- the flattening layer 604 is then followed by the fully connected layer 606 with 4 output scores that correspond to each of the 4 gestures.
- the LSTM model may be trained for classifying the dynamic hand gesture from amongst the plurality of dynamic hand gestures by using a softmax activation function.
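- Under the stated dimensions (100 time steps of 63 coordinate values, 200 LSTM cells, a flattening layer, and a 4-way fully connected softmax output), one possible Keras realization of the classifier is sketched below; the optimizer and any hyper-parameters not named above are assumptions.

```python
from tensorflow.keras import Input, layers, models

TIMESTEPS, FEATURES, NUM_CLASSES = 100, 63, 4   # 63 = 3 coordinates x 21 key-points

def build_gesture_classifier():
    """Multi-layered LSTM classifier: LSTM -> Flatten -> fully connected softmax."""
    model = models.Sequential([
        Input(shape=(TIMESTEPS, FEATURES)),
        # First layer: 200 LSTM cells learning long-term dependencies in the sequence.
        layers.LSTM(200, return_sequences=True),
        # Second layer: flatten the temporal output to one dimension.
        layers.Flatten(),
        # Third layer: output scores squashed to posterior probabilities via softmax.
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```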
- the gesture classification module interprets, by using a softmax activation function, the output scores as un-normalized log probabilities and squashes them to lie between 0 and 1 using the following equation: σ(s)_j = e^(s_j) / Σ_{k=0}^{K−1} e^(s_k), for j = 0, . . . , K−1, where:
- K denotes number of classes
- s is a K×1 vector of scores
- j is an index varying from 0 to K−1
- σ(s) is a K×1 output vector denoting the posterior probabilities associated with each gesture
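- A direct NumPy transcription of this softmax step is shown below; the max-subtraction is a standard numerical-stability detail not stated above.

```python
import numpy as np

def softmax(s):
    """Squash a K x 1 score vector into posterior probabilities sigma(s) in (0, 1)."""
    s = np.asarray(s, dtype=float)
    e = np.exp(s - np.max(s))        # subtract the max score for numerical stability
    return e / np.sum(e)

# Example: raw scores for (bloom, click, zoom-in, zoom-out)
print(softmax([2.0, 0.5, 0.1, -1.0]))   # four probabilities summing to 1
```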
- the LSTM network is trained for classifying the dynamic gesture into one of the gesture classes.
- training the LSTM network includes computing the cross-entropy loss Li of the i-th training sample of the batch by using the following equation: Li = −Σ_{j=0}^{K−1} h_j log(σ(s)_j), where:
- h is a 1×K vector denoting the one-hot label of the input. Further, the mean of Li is computed over the training examples of the batch and is propagated back in the LSTM network to fine-tune the LSTM model during training.
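- The per-sample cross-entropy and its batch mean can be written as below; this mirrors the description (one-hot label h, softmax posteriors σ(s)) and is a sketch rather than the training code used for the reported results.

```python
import numpy as np

def cross_entropy(scores, one_hot_label):
    """L_i = -sum_j h_j * log(sigma(s)_j) for a single training sample."""
    e = np.exp(np.asarray(scores, dtype=float) - np.max(scores))
    probs = e / np.sum(e)                                  # softmax posteriors sigma(s)
    return -float(np.sum(np.asarray(one_hot_label) * np.log(probs + 1e-12)))

def batch_loss(batch_scores, batch_labels):
    """Mean of L_i over the batch; this mean is back-propagated to fine-tune the LSTM."""
    return float(np.mean([cross_entropy(s, h) for s, h in zip(batch_scores, batch_labels)]))
```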
- Upon classifying the dynamic gesture into the at least one predefined gesture class at 416, the gesture recognition system communicates the classified at least one predefined gesture class to the device 408, thereby enabling the device 408 to trigger a pre-defined task in the AR application.
- the embodiments herein utilize a data set of Bloom, Click, Zoom-In, Zoom-Out dynamic hand gestures captured in egocentric view.
- the data set includes 480 videos, comprising 100 videos per gesture in the train set and 20 videos per gesture in the test set.
- the videos in the dataset are high-quality videos captured at a resolution of 320×240 and at 30 FPS.
- Six users with varying skin colors, with ages ranging from 21 to 55, were involved in the data collection.
- the videos were recorded in different places (outdoor, indoor, living room, office setting, cafeteria) in order to gather maximum variation in color composition, lighting conditions, and dynamic background scenes.
- Each gesture lasted for an average of 4.1 seconds, with the most complex Bloom gesture taking an average of 5 seconds and the simpler zoom gestures taking an average of 3.5 seconds.
- the hand pose detection module (described with reference to FIG. 4 ) is utilized for estimating the hand pose by detecting 21 key-points of hand.
- the key points detected by the hand pose detection module are shown in FIG. 7 .
- the 21 key-points detected by the hand pose detection module are shown as an overlay on the input images while testing the gesture recognition system.
- the 3D coordinate values of these 21 key-points are then fed to the LSTM network for gesture classification.
- the gesture recognition system utilizes the dataset of 480 videos for training and testing the LSTM classification network. While training, each of the 400 videos from the train set is sampled into 100 frames spread evenly across the duration for feeding into the LSTM network. With a batch size of 5 and a validation split of 70:30, the LSTM network is trained for 300 epochs, taking around 11 hours on the GPU set-up. An accuracy of 91% is achieved on the validation split while training the network. Further, the model is tested on a test set of 80 videos. Table 1 shows a confusion matrix for the experiments; an accuracy of 87.5% is achieved, with 9 cases of misclassification out of 80. The presence of a dynamic hand gesture is detected when the predicted probability of a dynamic hand gesture is more than 85%, i.e., when max_i σ(s)_i > 0.85, where:
- σ(s)_i is the predicted probability for the i-th class.
- the recognized dynamic hand gesture is communicated to the smartphone. Otherwise, no gesture-detection is reported.
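- The 85% threshold can be applied as in the short snippet below: the most probable class is reported only when its posterior exceeds 0.85, otherwise no gesture detection is reported. The class names are illustrative.

```python
import numpy as np

GESTURES = ["bloom", "click", "zoom_in", "zoom_out"]

def detect_gesture(probabilities, threshold=0.85):
    """Return the recognized gesture, or None when no class clears the threshold."""
    probabilities = np.asarray(probabilities)
    i = int(np.argmax(probabilities))
    return GESTURES[i] if probabilities[i] > threshold else None

print(detect_gesture([0.05, 0.90, 0.03, 0.02]))   # -> "click"
print(detect_gesture([0.40, 0.30, 0.20, 0.10]))   # -> None (no gesture reported)
```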
- Table 1 below illustrates the confusion matrix for the gesture recognition system, yielding an accuracy of 87.5% with 9 cases of mis-classification out of 80.
- the disclosed LSTM-only architecture is capable of delivering frame rates of up to 107 FPS on the GPU implementation.
- the hand pose estimation network works at 9 FPS.
- the hand pose estimation network is allowed to drop frames; the latest frame received at the server is fed to the network.
- the 3D coordinate values are interpolated before feeding them to the LSTM network to get 100 data-points.
- This enables the framework to dynamically adapt to GPU performance, hence minimizing the recognition time after completion of the gesture.
- the average response time of the proposed framework is found to be 0.8 s on the GPU configuration.
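- One way to realize the frame-dropping and interpolation behaviour described above is sketched below: a one-slot buffer keeps only the latest received frame for the slower pose network, and the resulting variable-length key-point track is linearly interpolated to 100 data points before classification. The buffer structure is an assumption about the implementation.

```python
import numpy as np

class LatestFrameBuffer:
    """Keeps only the most recent frame, so stale frames are silently dropped."""
    def __init__(self):
        self._frame = None

    def push(self, frame):
        self._frame = frame            # overwrite: older, unprocessed frames are dropped

    def pop(self):
        frame, self._frame = self._frame, None
        return frame

def interpolate_to_length(keypoint_seq, num_points=100):
    """Linearly interpolate a variable-length (T, 63) sequence to (num_points, 63)."""
    keypoint_seq = np.asarray(keypoint_seq, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(keypoint_seq))
    t_new = np.linspace(0.0, 1.0, num_points)
    # Interpolate each coordinate channel independently over normalized time.
    return np.stack([np.interp(t_new, t_old, keypoint_seq[:, c])
                     for c in range(keypoint_seq.shape[1])], axis=1)
```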
- FIG. 8 is a block diagram of an exemplary computer system 801 for implementing embodiments consistent with the present disclosure.
- the computer system 801 may be implemented alone or as a combination of components of the system 302 (FIG. 3). Variations of computer system 801 may be used for implementing the devices included in this disclosure.
- Computer system 801 may comprise a central processing unit (“CPU” or “hardware processor”) 802 .
- the hardware processor 802 may comprise at least one data processor for executing program components for executing user- or system-generated requests.
- the processor may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
- the processor may include a microprocessor, such as AMD Athlon™, Duron™ or Opteron™, ARM's application, embedded or secure processors, IBM PowerPC™, Intel's Core, Itanium™, Xeon™, Celeron™ or other line of processors, etc.
- the processor 802 may be implemented using mainframe, distributed processor, multi-core, parallel, grid, or other architectures. Some embodiments may utilize embedded technologies like application specific integrated circuits (ASICs), digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), etc.
- Processor 802 may be disposed in communication with one or more input/output (I/O) devices via I/O interface 803.
- the I/O interface 803 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11 a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
- the computer system 801 may communicate with one or more I/O devices.
- the input device 804 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, sensor (e.g., accelerometer, light sensor, GPS, gyroscope, proximity sensor, or the like), stylus, scanner, storage device, transceiver, video device/source, visors, etc.
- Output device 805 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, or the like), audio speaker, etc.
- a transceiver 806 may be disposed in connection with the processor 802 . The transceiver may facilitate various types of wireless transmission or reception.
- the transceiver may include an antenna operatively connected to a transceiver chip (e.g., Texas Instruments WiLink WL1283, Broadcom BCM4750IUB8, Infineon Technologies X-Gold 618-PMB9800, or the like), providing IEEE 802.11a/b/g/n, Bluetooth, FM, global positioning system (GPS), 2G/3G HSDPA/HSUPA communications, etc.
- the processor 802 may be disposed in communication with a communication network 808 via a network interface 807 .
- the network interface 807 may communicate with the communication network 808 .
- the network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/Internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
- the communication network 808 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
- the computer system 801 may communicate with devices 809 and 810 .
- These devices may include, without limitation, personal computer(s), server(s), fax machines, printers, scanners, various mobile devices such as cellular telephones, smartphones (e.g., Apple iPhone, Blackberry, Android-based phones, etc.), tablet computers, eBook readers (Amazon Kindle, Nook, etc.), laptop computers, notebooks, gaming consoles (Microsoft Xbox, Nintendo DS, Sony PlayStation, etc.), or the like.
- the computer system 801 may itself embody one or more of these devices.
- the processor 802 may be disposed in communication with one or more memory devices (e.g., RAM 713 , ROM 714 , etc.) via a storage interface 812 .
- the storage interface may connect to memory devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc.
- the memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent discs (RAID), solid-state memory devices, solid-state drives, etc. Variations of memory devices may be used for implementing, for example, any databases utilized in this disclosure.
- the memory devices may store a collection of program or database components, including, without limitation, an operating system 816 , user interface application 817 , user/application data 818 (e.g., any data variables or data records discussed in this disclosure), etc.
- the operating system 816 may facilitate resource management and operation of the computer system 801 .
- Examples of operating systems include, without limitation, Apple Macintosh OS X, Unix, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like.
- User interface 817 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities.
- user interfaces may provide computer interaction interface elements on a display system operatively connected to the computer system 801, such as cursors, icons, check boxes, menus, scrollers, windows, widgets, etc.
- Graphical user interfaces may be employed, including, without limitation, Apple Macintosh operating systems' Aqua, IBM OS/2, Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, web interface libraries (e.g., ActiveX, Java, Javascript, AJAX, HTML, Adobe Flash, etc.), or the like.
- computer system 801 may store user/application data 818 , such as the data, variables, records, etc. as described in this disclosure.
- databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase.
- databases may be implemented using standardized data structures, such as an array, hash, linked list, structured text file (e.g., XML), table, or as object-oriented databases (e.g., using ObjectStore, Poet, Zope, etc.).
- Such databases may be consolidated or distributed, sometimes among the various computer systems discussed above in this disclosure. It is to be understood that the structure and operation of any computer or database component may be combined, consolidated, or distributed in any working combination.
- the server, messaging and instructions transmitted or received may emanate from hardware, including operating system, and program code (i.e., application code) residing in a cloud implementation.
- one or more of the systems and methods provided herein may be suitable for cloud-based implementation.
- some or all of the data used in the disclosed methods may be sourced from or stored on any cloud computing platform.
- Various embodiments disclose a marker-less dynamic hand gesture recognition method and system for gesture recognition in egocentric videos using a deep learning approach.
- the disclosed system works with RGB image data only, thereby precluding the need for depth information. This can enable a wider reach of frugal devices for AR applications.
- the LSTM network is capable of recognizing 4 intuitive hand gestures (Bloom, Click, Zoom-in and Zoom-out) in real-time and has the potential to be extended for more complex recognition tasks by fine tuning the models using more realistic hand gesture data.
- the disclosed system is capable of reducing turn-around time and enhancing the accuracy of gesture recognition, as described with reference to the example scenario.
- a computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored.
- a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein.
- the term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
Abstract
Description
- This U.S. patent application claims priority under 35 U.S.C. § 119 to: India Application No. 201721035650, filed on Oct. 7, 2017. The entire contents of the aforementioned application are incorporated herein by reference.
- This disclosure relates generally to detection of hand gestures, and more particularly to a system and method for detecting interaction of three dimensional dynamic hand gestures with frugal augmented reality (AR) devices such as head-mount devices.
- Wearable Augmented Reality (AR) devices have been exceedingly popular in recent years. The user interaction modalities used in such devices point to the fact that hand gestures form an intuitive means of interaction in ARNR (virtual reality) applications. These devices use a variety of on-board sensors and customized processing chips which often ties the technology to complex and expensive hardware. These devices are tailor made to perform a specific function and are mostly readily unavailable due to their exorbitant prices.
- Convention generic platforms, for instance, Microsoft Kinect™ and Leap Motion™ Controller provide the much needed abstraction. The inventors here have recognized several technical problems with such conventional systems, as explained below. Such conventional platforms/device fare poorly in varying light conditions such as direct sunlight, incandescent light and outdoor environments due to the presence of infrared radiation and in the presence of reflective surfaces such as a thick glass and under water.
- Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for hand-gesture recognition is disclosed. The method includes receiving, via one or more hardware processors, a plurality of frames of a media stream of a scene captured from a first person view (FPV) of a user using at least one RGB sensor communicably coupled to a wearable AR device. The media stream includes RGB image data associated with the plurality of frames of the scene. The scene comprises a dynamic hand gesture performed by the user. Further, the method includes estimating, via the one or more hardware processors, a temporal information associated with the dynamic hand gesture from the RGB image data by using a deep learning model. The estimated temporal information is associated with hand poses of the user and comprises a plurality of key-points identified on user's hand in the plurality of frames. Further, the method includes classifying, by using a multi-layered Long Short Term memory (LSTM) classification network, the dynamic hand gesture into at least one predefined gesture class based on the temporal information of the key points, via the one or more hardware processors.
- In another embodiment, a system for gesture recognition is provided. The system includes one or more memories; and one or more hardware processors, the one or more memories coupled to the one or more hardware processors, wherein the at least one processor is capable of executing programmed instructions stored in the one or more memories to receive a plurality of frames of a media stream of a scene captured from a first person view (FPV) of a user using at least one RGB sensor communicably coupled to a wearable AR device. The media stream includes RGB image data associated with the plurality of frames of the scene. The scene includes a dynamic hand gesture performed by the user. The one or more hardware processors are further configured by the instructions to estimate a temporal information associated with the dynamic hand gesture from the RGB image data by using a deep learning model. The estimated temporal information is associated with hand poses of the user and includes a plurality of key-points identified on user's hand in the plurality of frames. Further, the one or more hardware processors are further configured by the instructions to classify, by using a multi-layered LSTM classification network, the dynamic hand gesture into at least one predefined gesture class based on the temporal information of the key points.
- In yet another embodiment, a non-transitory computer-readable medium having embodied thereon a computer program for executing a method for gesture recognition is provided. The method includes receiving a plurality of frames of a media stream of a scene captured from a first person view (FPV) of a user using at least one RGB sensor communicably coupled to a wearable AR device. The media stream includes RGB image data associated with the plurality of frames of the scene. The scene comprises a dynamic hand gesture performed by the user. Further, the method includes estimating a temporal information associated with the dynamic hand gesture from the RGB image data by using a deep learning model. The estimated temporal information is associated with hand poses of the user and comprises a plurality of key-points identified on user's hand in the plurality of frames. Further, the method includes classifying, by using a multi-layered LSTM classification network, the dynamic hand gesture into at least one predefined gesture class based on the temporal information of the key points.
- It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
- The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
-
FIGS. 1A-1D illustrates various examples of dynamic hand gestures according to some embodiments of the present disclosure. -
FIG. 2 illustrates an example system architecture for gesture recognition using deep learning according to some embodiments of the present disclosure. -
FIG. 3 illustrates a network implementation of system for gesture recognition using deep learning according to some embodiments of the present disclosure. -
FIG. 4 illustrates a representative process flow for gesture recognition using deep learning according to some embodiments of the present disclosure. -
FIG. 5 illustrates a process flow for estimating temporal information associated with the dynamic hand gesture according to some embodiments of the present disclosure. -
FIG. 6 illustrates an example multi-layer LSTM network for gesture classification according to some embodiments of the present disclosure. -
FIG. 7 illustrates a plurality of key-points detected by hand pose detection module as overlay on input images according to some embodiments of the present disclosure; and -
FIG. 8 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure. - Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
- Augmented reality refers to representation of a view of physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, text, graphics, or video. AR is useful in various applications such as medical, education, entertainment, military, and so on. Wearable AR/VR devices such as the Microsoft Halolens™, Daqri Smart Helmet™, Meta Glasses™ have been exceedingly popular in recent years.
- The user interaction modalities used in such devices point to the fact that hand gestures form an intuitive means of interaction in AR/VR applications. These devices use a variety of on-board sensors and customized processing chips which often ties the technology to complex and expensive hardware. These devices are tailor made to perform a specific function and are not readily available due to their exorbitant prices. Generic platforms such as the Microsoft Kinect™ and Leap Motion™ Controller provide the much needed abstraction but fare poorly in direct sunlight, incandescent light and outdoor environments due to the presence of infrared radiation and in the presence of reflective surfaces such as a thick glass and under water.
- Currently due to advances in powerful processors and high-quality optics in smart mobile electronic devices, such devices have been gaining popularity as appealing and readily available platform for AR/VR applications. For instance, devices such as Google Cardboard™ and Wearality which are video-see-through devices for providing an immersive VR experience.
- Using a stereo-rendered camera feed and overlaid images, audio and text, these devices can also be extended for AR applications. The main motive of using these frugal headsets (Google Cardboard or Wearality with an Android smartphone) was their economic viability, portability and easy scalability to the mass market. However, accessibility of sensors to these head mounted devices (HMDs) is limited to the sensors available on the attached smartphone. Current versions use a magnetic trigger or a conductive lever to trigger a single event hence curtailing the richness of possible user interaction.
- In addition, frequent usage of magnetic trigger and conductive lever leads to wear and tear of the device. Moreover, head tracking in said devices is inconvenient and shifts the focus away from object of interest in user Field of View (FoV). Moreover, such devices offer inaccurate speech recognition in industrial outdoor setting due to ambient noise, Based on the above mentioned technical problems in the conventional devices, hand gestures are typically the preferred mode of interaction as they reduce human-effort and are effective in interacting with the surrounding environment. However, current methods for hand gesture recognition in First Person View (FPV) are constrained to specific use-cases and lacks robustness under realistic conditions because of skin color dependency.
- Various embodiments disclosed herein provides methods and system provide technical solution to the above mentioned technical problems in gesture detection, particularly dynamic hand gesture detection, using deep learning approach. By using deep learning approach, computer vision models can be built that are robust to intra class variations and often surpass human abilities in performing detection and classification tasks. In an embodiment, a system for detecting and classifying complex hand gestures, such as Bloom, Click, Zoom-In, Zoom-Out in FPV for AR applications involving single RGB camera input with-out having built-in depth sensors, is presented. The aforementioned hand gestures are presented in
FIGS. 1A-1D for ease of understanding. By using the deep learning approach for hand gesture detection, the disclosed method and system overcome the limitations of existing techniques and open avenues for rich user interaction on frugal devices. - Referring now to
FIGS. 1A-1D, various dynamic hand gestures are illustrated. For example, FIG. 1A illustrates a 'Bloom' dynamic hand gesture, FIG. 1B illustrates various stages of a 'Click' dynamic hand gesture, FIG. 1C illustrates various stages of a 'Zoom-In' dynamic hand gesture, and FIG. 1D illustrates various stages of a 'Zoom-Out' dynamic hand gesture. Herein, the term 'dynamic' hand gesture refers to a hand gesture which is not static but requires dynamic motion. Accordingly, the dynamic hand gestures considered herein, such as Bloom, Click, Zoom-In, and Zoom-Out, are each shown to include multiple stages. For instance, the Bloom hand gesture illustrated in FIG. 1A is performed by stage 110, followed by stage 112, which is further followed by stage 114. The Bloom hand gesture can be performed for a predefined task, for example, a menu display operation. Similarly, FIG. 1B illustrates multiple stages of hand movement to perform the Click hand gesture, including stage 120 followed by stage 122, which is further followed by stage 124. The Click hand gesture can be performed for a predefined task, such as a select/hold operation. Also, FIG. 1C illustrates multiple stages of hand movement to execute the Zoom-In hand gesture, including stage 130 followed by stage 132, which is further followed by stage 134. The Zoom-In hand gesture can be performed for zooming into a display, for example that of a scene. The execution of the Zoom-Out hand gesture is shown in FIG. 1D, wherein stage 140 of hand movement is followed by hand movement in stage 142, which is finally followed by stage 144. The Zoom-Out hand gesture can be performed, for instance, for a predefined task such as zooming out of a scene being displayed. - Herein, it will be noted that the aforementioned hand gestures are presented for exemplary purposes and are not intended to limit the embodiments disclosed herein. Distinct applications and devices can utilize distinct hand gestures to perform various functionalities by utilizing the computations described herein in various embodiments. Moreover, the dynamic hand gesture may correspond to one of a 2D hand gesture and a 3D hand gesture.
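By way of illustration only, the mapping from a recognized gesture class to its predefined task can be expressed as a simple dispatch table. The minimal Python sketch below is not part of the present disclosure; the application object and its handler methods (show_menu, select_hold, zoom) are hypothetical names.

```python
# Illustrative dispatch table: recognized gesture class -> predefined task.
# The `app` object and its methods are hypothetical placeholders.
GESTURE_ACTIONS = {
    "bloom":    lambda app: app.show_menu(),       # menu display operation
    "click":    lambda app: app.select_hold(),     # select/hold operation
    "zoom_in":  lambda app: app.zoom(factor=1.25), # zoom into the displayed scene
    "zoom_out": lambda app: app.zoom(factor=0.8),  # zoom out of the displayed scene
}

def trigger_task(app, gesture_class):
    """Trigger the predefined task associated with a recognized gesture."""
    action = GESTURE_ACTIONS.get(gesture_class)
    if action is not None:
        action(app)
```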
- The embodiments disclosed herein present a method and system for detecting complex dynamic hand gestures, such as those described and depicted in
FIGS. 1A-1D, in First Person View (FPV) for AR applications involving a single RGB camera. The system uses RGB image data received from the single RGB camera as the input, without requiring any depth information, thereby precluding the need for additional sophisticated depth sensors and overcoming the limitations of existing techniques. A high-level example system architecture for gesture detection in accordance with various embodiments of the present disclosure is presented here with reference to FIG. 2. - The methods and systems are not limited to the specific embodiments described herein. In addition, the method and system can be practiced independently and separately from other modules and methods described herein. Each device element/module and method can be used in combination with other elements/modules and other methods.
- The manner in which the system and method for gesture detection using head-mounted devices are implemented has been explained in detail with respect to
FIGS. 1 through 5. While aspects of the described methods and systems for gesture detection using head-mounted devices can be implemented in any number of different systems, utility environments, and/or configurations, the embodiments are described in the context of the following exemplary system(s). - Referring now to
FIG. 2, an example system architecture 200 for gesture detection using deep learning is described in accordance with various embodiments of the present disclosure. The system architecture is shown to include a device for capturing a media stream in the FPV of a user. In a simple form, the disclosed device 202 may include (1) a single RGB camera, for example, installed in a mobile communication device such as a smartphone, and (2) an AR wearable, for example a head-mounted AR device. An example of such an AR wearable is Google Cardboard. The media stream captured by the RGB camera in the user's FPV (facilitated by the AR wearable) is sent to a system 204 for gesture detection. In an embodiment, the system may be embodied in a remote server. In an embodiment, the media stream may be downscaled prior to sending it to the remote server. The system 204 is adapted to classify the performed gesture in the media stream in order to recognize the gesture. Upon recognition of the gesture, the system 204 communicates the result back to the mobile communication device. - Referring now to
FIG. 3, a network implementation 300 of a system 302 for gesture detection is illustrated, in accordance with an embodiment of the present subject matter. The system is adapted to receive a media stream having a dynamic hand gesture being performed for executing a predefined task, wherein the media stream is captured in the user's FPV. Various hand gestures and the corresponding predefined tasks have been described with reference to FIGS. 1A-1D. The system 302 is capable of detecting the dynamic hand gesture. In an example embodiment, the detection of the dynamic hand gesture includes detecting a presence of a stable hand in a hand pose, followed by motion of the hand in a particular manner so as to execute the predefined task. - Although the present subject matter is explained considering that the
system 302 is implemented for gesture detection via head-mounted devices, it may be understood that the system 302 is not restricted to any particular machine or environment. The system 302 can be utilized for a variety of domains where detection of a gesture for execution of a task is to be determined. The system 302 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. - Herein, the
system 302 may capture the media stream, for example, videos and/or images, via multiple devices and/or machines 304-1, 304-2 . . . 304-N, collectively referred to as devices 304 hereinafter. Each of the devices includes at least one RGB sensor communicably coupled to a wearable AR device. The RGB sensor may be embodied in a media capturing device such as a handheld electronic device, a mobile phone, a smartphone, a portable computer, a PDA, and so on. In an embodiment, the device may embody a VR camera in addition to the RGB sensor. Alternatively, the device embodying the RGB sensor may be communicably coupled to a wearable AR device to allow capturing of the media stream in the FPV of a user holding the media capturing device and wearing the wearable AR device. Herein, AR devices are devices that may embody AR technologies. AR technologies enhance the user's perception and help the user to see, hear, and feel the environment in enriched ways. The devices 304 are communicatively coupled to the system 302 through a network 306, and may be capable of transmitting the captured media stream to the system 302. - In one implementation, the
network 306 may be a wireless network, a wired network, or a combination thereof. The network 306 can be implemented as one of the different types of networks, such as an intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 306 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 306 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like. - In an embodiment, the
system 302 may be embodied in a computing device 310. Examples of the computing device 310 may include, but are not limited to, a desktop personal computer (PC), a notebook, a laptop, a portable computer, a smart phone, a tablet, and the like. The system 302 may also be associated with a data repository 312 to store the media stream. Additionally or alternatively, the data repository 312 may be configured to store data and/or information generated during gesture recognition in the media stream. The repository 312 may be configured outside of, and communicably coupled to, the computing device 310 embodying the system 302. Alternatively, the data repository 312 may be configured within the system 302. An example implementation of the system 302 for gesture recognition in the media stream is described further with reference to FIG. 4. -
FIG. 4 illustrates a flow diagram of a method 400 for hand-gesture recognition, according to some embodiments of the present disclosure. The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 400 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network. The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 400, or an alternative method. Furthermore, the method 400 can be implemented in any suitable hardware, software, firmware, or combination thereof. In an embodiment, the method 400 depicted in the flow chart may be executed by a system, for example, the system 302 of FIG. 3. In an example embodiment, the system 302 may be embodied in an exemplary computer system, for example computer system 801 (FIG. 8). The method 400 of FIG. 4 will be explained in more detail below with reference to FIGS. 4-7. - Referring to
FIG. 4, in the illustrated embodiment, the method 400 is initiated when, at 402, a user captures a media stream by means of an RGB sensor communicably coupled to a wearable AR device 404. Examples of a device 406 embodying the RGB sensor may include, but are not limited to, a smartphone, a PDA, a portable computer, and so on. The wearable AR device 404 may include hardware and software that may be collectively configured to host an AR application for performing AR related functions. For brevity of description, the device 406 incorporating the RGB sensor, along with the device running the AR application (or the wearable AR device 404), may hereinafter be collectively referred to as a device 408. The device 408 captures a media stream of dynamic gestures, for example, gestures as described in FIGS. 1A-1D, performed by the user in the FPV. In an embodiment, the gesture may include a dynamic hand gesture. In an embodiment, the dynamic hand gesture may be one of a 2D and a 3D hand gesture. The frames of the media stream captured in FPV are streamed for processing to the gesture recognition system (for example, the system 302 of FIG. 3), at 410. In an implementation, the frames obtained from the device 408 are first down-scaled, for example to a resolution of 320×240, to achieve real-time performance by reducing the computational time without compromising on quality. In an embodiment, the device 408 streams the frames to the gesture recognition system, for example at 25 FPS. - At 412, the gesture recognition system receives a plurality of frames of the media stream. The frames are the RGB frames acquired from the
device 408. The RGB frames include RGB image data associated with the plurality of frames of the scene. Herein, the RGB image data refers to data corresponding to the Red, Green, and Blue colors associated with the frames. - At 414, temporal information associated with the dynamic hand gesture is estimated from the RGB image data by using a deep learning model. In an embodiment, the gesture recognition system estimates the temporal information associated with the dynamic hand gesture. The estimated temporal information is associated with hand poses of the user and includes a plurality of key-points identified on the user's hand in the plurality of frames. Various hand poses (or stages of dynamic hand gestures) of a user while performing the dynamic hand gestures are described with reference to
FIGS. 1A-1D. The estimation of the temporal information is explained in detail with reference to FIG. 5. - Referring to
FIG. 5, a process flow for estimating temporal information associated with the dynamic hand gesture is illustrated. Herein, the estimation of the temporal information is performed by a hand pose estimation module 502. The hand pose estimation module 502 facilitates estimating the temporal information based on a deep learning approach that estimates 3D hand pose from a single RGB image, thereby overcoming the challenges caused by the unavailability of depth information in conventional systems. In an embodiment, a deep learning network utilizes the RGB image data to estimate the temporal information. As described earlier, the temporal information includes a plurality of key-points on the hand present in the user's field of view (FoV) in the frames. In an embodiment, the plurality of key-points includes 21 hand key-points, comprising 4 key-points per finger and one key-point close to the wrist of the user's hand. The gesture recognition system detects the plurality of key-points and learns/estimates a plurality of network-implicit 3D articulation priors having the plurality of key-points of sample users' hands from sample RGB images using the deep learning network. The plurality of network-implicit 3D articulation priors includes a plurality of key-points determined from a plurality of training sample RGB images of users' hands. Based on the plurality of network-implicit 3D articulation priors, the hand pose estimation module 502 detects the plurality of key-points on the user's hand in the plurality of frames (or RGB images). A detailed process flow for detecting the key-points on the user's hand in the RGB images is illustrated in FIG. 5. For example, RGB images are provided as input to the hand pose estimation module 502 for estimating temporal information associated with the dynamic hand gesture. The hand pose estimation module 502 estimates the temporal information with the help of deep learning networks including, but not limited to, the HandSegNet network, the PoseNet network, and the PosePrior network, as described below (a simplified pipeline sketch follows the list): - HandSegNet network (marked as 508): The HandSegNet network is a segmentation network that localizes the hand within the image/frame.
- PoseNet (marked as 510): Given the segmented hand mask as the input, the PoseNet localizes the 21 hand key-points by estimating 2-dimensional score maps for each key-point, containing likelihood information about its spatial location.
- PosePrior (marked as 512): PosePrior network estimates the most likely 3D hand structure conditioned on the score maps obtained from PoseNet.
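For illustration, the three networks described above can be viewed as one pipeline that maps an RGB frame to 21 three-dimensional key-points. The following is a minimal sketch under the assumption that pre-trained HandSegNet, PoseNet, and PosePrior models are available as callables; their interfaces, and the plain argmax read-out of the 2D score maps, are simplifying assumptions rather than the exact implementation of this disclosure.

```python
import numpy as np

def estimate_hand_pose(frame_rgb, handsegnet, posenet, poseprior):
    """Sketch of the HandSegNet -> PoseNet -> PosePrior pipeline.

    `handsegnet`, `posenet`, and `poseprior` are assumed to be pre-trained
    callables; their exact interfaces are illustrative assumptions.
    """
    # 1. Localize the hand within the frame (hand segmentation mask).
    hand_mask = handsegnet(frame_rgb)            # H x W hand likelihood

    # 2. Estimate one 2D score map per key-point for the localized hand.
    score_maps = posenet(frame_rgb, hand_mask)   # 21 x h x w score maps

    # Read out the most likely 2D location of each of the 21 key-points.
    keypoints_2d = np.array([
        np.unravel_index(np.argmax(score_map), score_map.shape)[::-1]  # (x, y)
        for score_map in score_maps
    ])

    # 3. Lift to the most likely 3D hand structure conditioned on the score maps.
    keypoints_3d = poseprior(score_maps)         # 21 x 3 coordinate values

    return keypoints_2d, keypoints_3d
```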
- In an example embodiment, the aforementioned deep learning networks may be pre-trained in order to estimate the plurality of key-points. For example, in an embodiment, the plurality of key-points may include 21 key-points of the user's hand. These networks may be trained using a large-scale 3D hand pose dataset having a plurality of training sample RGB images based on synthetic hand models. The dataset may include a large set of photo-realistic renderings of different subjects performing multiple unique actions. For building the dataset, the hands of all the users present in the dataset may lie in an optimum range, for example 40 cm to 65 cm from the camera center, which is ideal for FPV use-cases. The light position and intensities may be randomized, and the images may be saved using lossy JPEG compression with losses of up to 40%. The background may be chosen at random from various images, and the camera location may be chosen randomly in a spherical vicinity around the hand for each frame, ensuring the robustness of the model to external factors. As described, using the deep learning networks, the hand pose
estimation module 502 detects the plurality of key-points on the user's hand in the plurality of frames based on the plurality of network-implicit 3D articulation priors. The 21 key-points detected by the network are shown as an overlay at 514 on the input video frames 516 (for example, video frames 518, 520, 522) in FIG. 5. - The hand pose estimation module outputs coordinate values for each of the 21 key-points (also referred to as the temporal information) detected on the user's hand. The temporal information is input to a gesture classification network. The gesture classification network includes an LSTM network. The LSTM network classifies the dynamic hand gesture into at least one predefined gesture class based on the key-points, as is explained further with reference to
FIGS. 4 and 6 . - Referring back to
FIG. 4 again, at 416, the dynamic gesture is classified into at least one predefined gesture class based on the temporal information of the key-points by using a multi-layered LSTM classification network. In an embodiment, the multi-layered LSTM network includes a first layer, a second layer, and a third layer. The first layer includes an LSTM layer consisting of a plurality of LSTM cells to learn long-term dependencies and patterns in the sequence of 3D coordinates of the 21 key-points detected on the user's hand. The second layer includes a flattening layer that makes the temporal data one-dimensional, and the third layer includes a fully connected layer with output scores that correspond to each of the 3D dynamic hand gestures. The output scores are indicative of the posterior probability corresponding to each of the dynamic hand gestures for classification into the at least one predefined gesture class. For example, in the present embodiment, if the system is trained for classification of dynamic hand gestures into four classes (for instance, the dynamic hand gestures defined in FIGS. 1A to 1D), then there are four output scores determined by the third layer. In alternate embodiments, the number of output scores can vary depending on the number of gesture classes. Herein, it will be noted that the ability and efficiency of LSTM neural networks in learning long-term dependencies of sequential data make the LSTM network-based architecture well suited for the task of gesture classification using the spatial locations of hand key-points in video frames. An important contribution of the disclosed embodiments towards dynamic gesture recognition is that inputting only the 3D coordinate values of the hand pose when modeling the key-points' variation across frames helps in achieving real-time performance of the framework by reducing computational cost. An example of classification of a dynamic gesture into at least one predefined class is described with reference to FIG. 6. - Referring now to
FIG. 6, an example multi-layer LSTM network 600 for the gesture classification task, denoting the output shape after every layer, is described. The LSTM network 600 is shown to include three layers, namely, a first layer 602 including an LSTM layer, a second layer 604 including a flattening layer, and a third layer 606 including a fully connected layer. Each gesture input is sampled into 100 frames spread evenly across its duration for feeding into the LSTM network 600, making the input of size 63×100 (3 coordinate values for each of the 21 key-points) to the LSTM layer 602, as illustrated in FIG. 6. The LSTM layer 602, consisting of 200 LSTM cells, tries to learn long-term dependencies and patterns in the sequence of coordinates during network training. The LSTM layer 602 is followed by the flattening layer 604 that makes the data one-dimensional. The flattening layer 604 is then followed by the fully connected layer 606 with 4 output scores that correspond to each of the 4 gestures. - In an embodiment, the LSTM model may be trained for classifying the dynamic hand gesture from amongst the plurality of dynamic hand gestures by using a softmax activation function. The gesture classification module interprets, by using the softmax activation function, the output scores as un-normalized log probabilities and squashes them to be between 0 and 1 using the following equation:
- σ(s)_j = exp(s_j) / Σ_{k=0}^{K−1} exp(s_k)
- where K denotes the number of classes, s is a K×1 vector of scores that is the input to the softmax function, and j is an index varying from 0 to K−1, and
- σ(s) is a K×1 output vector denoting the posterior probabilities associated with each gesture.
- In an embodiment, the LSTM network is trained for classifying the dynamic gesture into one of the gesture classes. In an embodiment, training the LSTM network includes computing the cross-entropy loss L_i of the ith training sample of the batch by using the following equation:
L_i = −h_j * log(σ(s)_j)
- where h is a 1×K vector denoting the one-hot label of the input. Further, the mean of L_i is computed over the training examples of the batch and is propagated back in the LSTM network to fine-tune the LSTM model during training. - Referring back to
FIG. 4, upon classifying the dynamic gesture into the at least one predefined gesture class at 416, the gesture recognition system communicates the classified at least one predefined gesture class to the device 408, thereby enabling the device 408 to trigger a pre-defined task in the AR application. - An example scenario illustrating gesture classification based on the disclosed embodiments is described further below.
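Before turning to that scenario, the classification network and training procedure described above can be sketched as follows. This is a minimal illustration assuming Keras (TensorFlow); the layer sizes (100 time steps of 63 coordinate values, 200 LSTM cells, a 4-way softmax output), the batch size, epoch count, and validation split follow the description above, while the choice of optimizer is an assumption.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Flatten, Dense

NUM_FRAMES, NUM_FEATURES, NUM_CLASSES = 100, 63, 4  # 21 key-points x 3 coordinates

# Three-layer classifier: LSTM layer -> flattening layer -> fully connected layer.
model = Sequential([
    LSTM(200, return_sequences=True, input_shape=(NUM_FRAMES, NUM_FEATURES)),
    Flatten(),
    Dense(NUM_CLASSES, activation="softmax"),  # posterior probability per gesture
])

# Cross-entropy loss averaged over the batch; the Adam optimizer is an assumption.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# x_train: (num_videos, 100, 63) key-point sequences; y_train: one-hot gesture labels.
# model.fit(x_train, y_train, batch_size=5, epochs=300, validation_split=0.3)
```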
- The embodiments herein utilize a data set of Bloom, Click, Zoom-In, and Zoom-Out dynamic hand gestures captured in egocentric view. The data set includes 480 videos, comprising 100 videos per gesture in the train set and 20 videos per gesture in the test set. The videos in the dataset are high-quality videos captured at a resolution of 320×240 and at 30 FPS. Six users with varying skin colors, with ages ranging from 21 to 55, were involved in the data collection. The videos were recorded in different places (outdoor, indoor, living room, office setting, cafeteria) in order to gather maximum variation in color composition, lighting conditions, and dynamic background scenes. Each gesture lasted for an average of 4.1 seconds, with the most complex gesture, Bloom, taking an average of 5 seconds and the simpler zoom gestures taking an average of 3.5 seconds.
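Each gesture video is later sampled into 100 frames spread evenly across its duration before being fed to the classifier (see the training details below). A minimal sketch of such even temporal sampling, assuming OpenCV for video decoding, is given here.

```python
import cv2
import numpy as np

def sample_frames_evenly(video_path, num_samples=100):
    """Return `num_samples` RGB frames spread evenly across a video's duration."""
    capture = cv2.VideoCapture(video_path)
    total_frames = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total_frames - 1, 0), num_samples).astype(int)

    frames = []
    for index in indices:
        capture.set(cv2.CAP_PROP_POS_FRAMES, int(index))
        ok, frame = capture.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    capture.release()
    return frames
```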
- The hand pose detection module (described with reference to
FIG. 4) is utilized for estimating the hand pose by detecting the 21 key-points of the hand. The key-points detected by the hand pose detection module are shown in FIG. 7. - As illustrated in
FIG. 7, the 21 key-points detected by the hand pose detection module are shown as an overlay on the input images while testing the gesture recognition system. The 3D coordinate values of these 21 key-points are then fed to the LSTM network for gesture classification. - The gesture recognition system utilizes the dataset of 480 videos for training and testing the LSTM classification network. While training, each of the 400 videos from the train set is sampled into 100 frames spread evenly across its duration for feeding into the LSTM network. With a batch size of 5 and a validation split of 70:30, the LSTM network is trained for 300 epochs, taking around 11 hours on the GPU set-up. An accuracy of 91% is achieved on the validation split while training the network. Further, the model is tested on a test set of 80 videos. Table 1 shows a confusion matrix for the experiments. An accuracy of 87.5% is achieved, with 9 cases of misclassification out of 80. The presence of a dynamic hand gesture is detected when the probability of a dynamic hand gesture is more than 85%, using the following equation:
- max_i σ(s)_i > 0.85
- where σ(s)_i is the predicted probability for the ith class. When this condition is met, the recognized dynamic hand gesture is communicated to the smartphone; otherwise, no gesture detection is reported.
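A minimal sketch of this decision rule, assuming NumPy and the 85% threshold stated above, is shown below; the class names follow FIGS. 1A-1D.

```python
import numpy as np

GESTURE_CLASSES = ("Bloom", "Click", "Zoom-In", "Zoom-Out")

def recognize_gesture(scores, threshold=0.85):
    """Apply softmax to the raw output scores and report a gesture only when
    its posterior probability exceeds the threshold; otherwise report nothing."""
    scores = np.asarray(scores, dtype=float)
    probabilities = np.exp(scores - scores.max())   # numerically stable softmax
    probabilities /= probabilities.sum()

    best = int(np.argmax(probabilities))
    if probabilities[best] > threshold:
        return GESTURE_CLASSES[best], float(probabilities[best])
    return None, float(probabilities[best])         # no gesture-detection reported
```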
Table 1 below illustrates the confusion matrix for the gesture recognition system, yielding an accuracy of 87.5% with 9 cases of mis-classification out of 80.
Predicted Gesture True Zoom Zoom Gesture Bloom Click In Out Unclassified Bloom 19 0 0 1 0 Click 0 16 0 4 0 Zoom-In 1 0 18 0 1 Zoom-Out 0 3 0 17 0 - The disclosed LS™-only architecture is capable of delivering frame rates of up to 107 on GPU implementation. However, the hand pose estimation network works at 9 FPS. To ensure maximum throughput of the combined framework, the hand pose estimation network is allowed to drop frames; the latest frame received at the server is fed to the network. The 3D coordinate values are interpolated before feeding them to the LSTM network to get 100 data-points. This enables the framework to dynamically adapt to GPU performance, hence minimizing the recognition time after completion the gesture. As a result, the average response time of the proposed framework is found to be 0:8 s on the GPU configuration. A block diagram of an
exemplary computer system 801 for implementing embodiments -
FIG. 8 is a block diagram of an exemplary computer system 801 for implementing embodiments consistent with the present disclosure. The computer system 801 may be implemented alone or in combination with components of the system 302 (FIG. 3). Variations of computer system 801 may be used for implementing the devices included in this disclosure. Computer system 801 may comprise a central processing unit ("CPU" or "hardware processor") 802. The hardware processor 802 may comprise at least one data processor for executing program components for executing user- or system-generated requests. The processor may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc. The processor may include a microprocessor, such as AMD Athlon™, Duron™ or Opteron™, ARM's application, embedded or secure processors, IBM PowerPC™, Intel's Core, Itanium™, Xeon™, Celeron™ or other lines of processors, etc. The processor 802 may be implemented using mainframe, distributed processor, multi-core, parallel, grid, or other architectures. Some embodiments may utilize embedded technologies like application-specific integrated circuits (ASICs), digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), etc. -
Processor 802 may be disposed in communication with one or more input/output (I/O) devices via I/O interface 803. The I/O interface 803 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11 a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc. - Using the I/
O interface 803, the computer system 801 may communicate with one or more I/O devices. For example, the input device 804 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, sensor (e.g., accelerometer, light sensor, GPS, gyroscope, proximity sensor, or the like), stylus, scanner, storage device, transceiver, video device/source, visors, etc. -
Output device 805 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, or the like), audio speaker, etc. In some embodiments, a transceiver 806 may be disposed in connection with the processor 802. The transceiver may facilitate various types of wireless transmission or reception. For example, the transceiver may include an antenna operatively connected to a transceiver chip (e.g., Texas Instruments WiLink WL1283, Broadcom BCM4750IUB8, Infineon Technologies X-Gold 618-PMB9800, or the like), providing IEEE 802.11a/b/g/n, Bluetooth, FM, global positioning system (GPS), 2G/3G HSDPA/HSUPA communications, etc. - In some embodiments, the
processor 802 may be disposed in communication with acommunication network 808 via anetwork interface 807. Thenetwork interface 807 may communicate with thecommunication network 808. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/Internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. Thecommunication network 808 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using thenetwork interface 807 and thecommunication network 808, thecomputer system 801 may communicate withdevices computer system 801 may itself embody one or more of these devices. - In some embodiments, the
processor 802 may be disposed in communication with one or more memory devices (e.g., RAM 713, ROM 714, etc.) via a storage interface 812. The storage interface may connect to memory devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent discs (RAID), solid-state memory devices, solid-state drives, etc. Variations of memory devices may be used for implementing, for example, any databases utilized in this disclosure. - The memory devices may store a collection of program or database components, including, without limitation, an
operating system 816, user interface application 817, user/application data 818 (e.g., any data variables or data records discussed in this disclosure), etc. The operating system 816 may facilitate resource management and operation of the computer system 801. Examples of operating systems include, without limitation, Apple Macintosh OS X, Unix, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like. User interface 817 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces may provide computer interaction interface elements on a display system operatively connected to the computer system 801, such as cursors, icons, check boxes, menus, scrollers, windows, widgets, etc. Graphical user interfaces (GUIs) may be employed, including, without limitation, Apple Macintosh operating systems' Aqua, IBM OS/2, Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, web interface libraries (e.g., ActiveX, Java, Javascript, AJAX, HTML, Adobe Flash, etc.), or the like. - In some embodiments,
computer system 801 may store user/application data 818, such as the data, variables, records, etc. as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase. Alternatively, such databases may be implemented using standardized data structures, such as an array, hash, linked list, structured text file (e.g., XML), table, or as object-oriented databases (e.g., using ObjectStore, Poet, Zope, etc.). Such databases may be consolidated or distributed, sometimes among the various computer systems discussed above in this disclosure. It is to be understood that the structure and operation of any computer or database component may be combined, consolidated, or distributed in any working combination. - Additionally, in some embodiments, the server, messaging and instructions transmitted or received may emanate from hardware, including operating system, and program code (i.e., application code) residing in a cloud implementation. Further, it should be noted that one or more of the systems and methods provided herein may be suitable for cloud-based implementation. For example, in some embodiments, some or all of the data used in the disclosed methods may be sourced from or stored on any cloud computing platform.
- Various embodiments disclose a marker-less dynamic hand gesture recognition method and system for gesture recognition in ego-centric videos using a deep learning approach. The disclosed system works with RGB image data only, thereby precluding the need for depth information. This can enable a wider reach of frugal devices for AR applications. The LSTM network is capable of recognizing 4 intuitive hand gestures (Bloom, Click, Zoom-In, and Zoom-Out) in real time and has the potential to be extended for more complex recognition tasks by fine-tuning the models using more realistic hand gesture data. The disclosed system is capable of reducing turn-around time and enhancing the accuracy of gesture recognition, as described with reference to the example scenario.
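As noted in the example scenario above, the hand pose estimation network is allowed to drop frames, and the 3D coordinate values are interpolated to 100 data-points before being fed to the LSTM network. A minimal sketch of that interpolation step, assuming NumPy, is given below.

```python
import numpy as np

def interpolate_to_fixed_length(keypoint_sequence, num_points=100):
    """Linearly resample a (num_frames, 63) sequence of 3D key-point
    coordinates to a fixed number of data-points along the time axis."""
    sequence = np.asarray(keypoint_sequence, dtype=float)
    source_positions = np.linspace(0.0, 1.0, len(sequence))
    target_positions = np.linspace(0.0, 1.0, num_points)
    return np.stack(
        [np.interp(target_positions, source_positions, sequence[:, d])
         for d in range(sequence.shape[1])],
        axis=1,
    )
```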
- The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
- Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
- It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.
Claims (17)
Li=−hj*log(σ(s)j)
Li=−hj*log(σ(s)j)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201721035650 | 2017-10-07 | ||
IN201721035650 | 2017-10-07 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190107894A1 true US20190107894A1 (en) | 2019-04-11 |
US10429944B2 US10429944B2 (en) | 2019-10-01 |
Family
ID=62904241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/020,245 Active US10429944B2 (en) | 2017-10-07 | 2018-06-27 | System and method for deep learning based hand gesture recognition in first person view |
Country Status (6)
Country | Link |
---|---|
US (1) | US10429944B2 (en) |
EP (1) | EP3467707B1 (en) |
JP (1) | JP6716650B2 (en) |
CN (1) | CN109635621B (en) |
CA (1) | CA3016921C (en) |
IL (1) | IL261580B (en) |
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180108165A1 (en) * | 2016-08-19 | 2018-04-19 | Beijing Sensetime Technology Development Co., Ltd | Method and apparatus for displaying business object in video image and electronic device |
US20190204930A1 (en) * | 2018-01-02 | 2019-07-04 | Boe Technology Group Co., Ltd. | Gesture recognition device, gesture recognition method, and gesture recognition system |
CN110286749A (en) * | 2019-05-27 | 2019-09-27 | 华中师范大学 | Hand gesture estimation and method for tracing based on depth data |
US20200005539A1 (en) * | 2018-06-27 | 2020-01-02 | Facebook Technologies, Llc | Visual flairs for emphasizing gestures in artificial-reality environments |
US10635895B2 (en) | 2018-06-27 | 2020-04-28 | Facebook Technologies, Llc | Gesture-based casting and manipulation of virtual content in artificial-reality environments |
CN111273778A (en) * | 2020-02-14 | 2020-06-12 | 北京百度网讯科技有限公司 | Method and device for controlling electronic equipment based on gestures |
US10712901B2 (en) | 2018-06-27 | 2020-07-14 | Facebook Technologies, Llc | Gesture-based content sharing in artificial reality environments |
CN112199994A (en) * | 2020-09-03 | 2021-01-08 | 中国科学院信息工程研究所 | Method and device for detecting interaction between 3D hand and unknown object in RGB video in real time |
CN112686084A (en) * | 2019-10-18 | 2021-04-20 | 宏达国际电子股份有限公司 | Image annotation system |
US10991163B2 (en) | 2019-09-20 | 2021-04-27 | Facebook Technologies, Llc | Projection casting in virtual environments |
CN112767300A (en) * | 2019-10-18 | 2021-05-07 | 宏达国际电子股份有限公司 | Method for automatically generating labeling data of hand and method for calculating skeleton length |
CN113010018A (en) * | 2021-04-20 | 2021-06-22 | 歌尔股份有限公司 | Interaction control method, terminal device and storage medium |
US11061479B2 (en) * | 2018-07-04 | 2021-07-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, device and readable storage medium for processing control instruction based on gesture recognition |
CN113239824A (en) * | 2021-05-19 | 2021-08-10 | 北京工业大学 | Dynamic gesture recognition method for multi-modal training single-modal test based on 3D-Ghost module |
US11086476B2 (en) * | 2019-10-23 | 2021-08-10 | Facebook Technologies, Llc | 3D interactions with web content |
US11086406B1 (en) * | 2019-09-20 | 2021-08-10 | Facebook Technologies, Llc | Three-state gesture virtual controls |
CN113296604A (en) * | 2021-05-24 | 2021-08-24 | 北京航空航天大学 | True 3D gesture interaction method based on convolutional neural network |
US11113893B1 (en) | 2020-11-17 | 2021-09-07 | Facebook Technologies, Llc | Artificial reality environment with glints displayed by an extra reality device |
CN113378641A (en) * | 2021-05-12 | 2021-09-10 | 北京工业大学 | Gesture recognition method based on deep neural network and attention mechanism |
US11170576B2 (en) | 2019-09-20 | 2021-11-09 | Facebook Technologies, Llc | Progressive display of virtual objects |
US11176699B2 (en) * | 2019-05-24 | 2021-11-16 | Tencent America LLC | Augmenting reliable training data with CycleGAN for hand pose estimation |
US11176755B1 (en) | 2020-08-31 | 2021-11-16 | Facebook Technologies, Llc | Artificial reality augments and surfaces |
US11178376B1 (en) | 2020-09-04 | 2021-11-16 | Facebook Technologies, Llc | Metering for display modes in artificial reality |
US11176745B2 (en) | 2019-09-20 | 2021-11-16 | Facebook Technologies, Llc | Projection casting in virtual environments |
US11175730B2 (en) | 2019-12-06 | 2021-11-16 | Facebook Technologies, Llc | Posture-based virtual space configurations |
JP2021531589A (en) * | 2019-04-29 | 2021-11-18 | 北京字節跳動網絡技術有限公司Beijing Bytedance Network Technology Co., Ltd. | Motion recognition method, device and electronic device for target |
US11189099B2 (en) | 2019-09-20 | 2021-11-30 | Facebook Technologies, Llc | Global and local mode virtual object interactions |
US20210400206A1 (en) * | 2019-02-19 | 2021-12-23 | Samsung Electronics Co., Ltd. | Electronic device and method for changing magnification of image using multiple cameras |
US11227445B1 (en) | 2020-08-31 | 2022-01-18 | Facebook Technologies, Llc | Artificial reality augments and surfaces |
US11256336B2 (en) | 2020-06-29 | 2022-02-22 | Facebook Technologies, Llc | Integration of artificial reality interaction modes |
US11257280B1 (en) | 2020-05-28 | 2022-02-22 | Facebook Technologies, Llc | Element-based switching of ray casting rules |
CN114185429A (en) * | 2021-11-11 | 2022-03-15 | 杭州易现先进科技有限公司 | Method for positioning gesture key points or estimating gesture, electronic device and storage medium |
US11294475B1 (en) | 2021-02-08 | 2022-04-05 | Facebook Technologies, Llc | Artificial reality multi-modal input switching model |
CN114510142A (en) * | 2020-10-29 | 2022-05-17 | 舜宇光学(浙江)研究院有限公司 | Gesture recognition method based on two-dimensional image, system thereof and electronic equipment |
CN114515146A (en) * | 2020-11-17 | 2022-05-20 | 北京机械设备研究所 | Intelligent gesture recognition method and system based on electrical measurement |
US11409405B1 (en) | 2020-12-22 | 2022-08-09 | Facebook Technologies, Llc | Augment orchestration in an artificial reality environment |
CN114979302A (en) * | 2022-04-22 | 2022-08-30 | 长江大学 | Self-adaptive entropy-based rapid worker action image transmission method and system |
CN115079818A (en) * | 2022-05-07 | 2022-09-20 | 北京聚力维度科技有限公司 | Hand capturing method and system |
US11461973B2 (en) | 2020-12-22 | 2022-10-04 | Meta Platforms Technologies, Llc | Virtual reality locomotion via hand gesture |
US11488320B2 (en) | 2019-07-31 | 2022-11-01 | Samsung Electronics Co., Ltd. | Pose estimation method, pose estimation apparatus, and training method for pose estimation |
WO2023122543A1 (en) * | 2021-12-20 | 2023-06-29 | Canon U.S.A., Inc. | Apparatus and method for gesture recognition stabilization |
US11748944B2 (en) | 2021-10-27 | 2023-09-05 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US11762952B2 (en) | 2021-06-28 | 2023-09-19 | Meta Platforms Technologies, Llc | Artificial reality application lifecycle |
US11798247B2 (en) | 2021-10-27 | 2023-10-24 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US11861757B2 (en) | 2020-01-03 | 2024-01-02 | Meta Platforms Technologies, Llc | Self presence in artificial reality |
US11893674B2 (en) | 2021-06-28 | 2024-02-06 | Meta Platforms Technologies, Llc | Interactive avatars in artificial reality |
CN117687517A (en) * | 2024-02-02 | 2024-03-12 | 北京思路智园科技有限公司 | Augmented reality teaching improvement method and system for chemical engineering teaching culture |
US11947862B1 (en) | 2022-12-30 | 2024-04-02 | Meta Platforms Technologies, Llc | Streaming native application content to artificial reality devices |
US11991222B1 (en) | 2023-05-02 | 2024-05-21 | Meta Platforms Technologies, Llc | Persistent call control user interface element in an artificial reality environment |
CN118131915A (en) * | 2024-05-07 | 2024-06-04 | 中国人民解放军国防科技大学 | Man-machine interaction method, device, equipment and storage medium based on gesture recognition |
CN118170258A (en) * | 2024-05-13 | 2024-06-11 | 湖北星纪魅族集团有限公司 | Click operation method and device, electronic equipment and storage medium |
US12008717B2 (en) | 2021-07-07 | 2024-06-11 | Meta Platforms Technologies, Llc | Artificial reality environment control through an artificial reality environment schema |
US12026527B2 (en) | 2022-05-10 | 2024-07-02 | Meta Platforms Technologies, Llc | World-controlled and application-controlled augments in an artificial-reality environment |
CN118351598A (en) * | 2024-06-12 | 2024-07-16 | 山东浪潮科学研究院有限公司 | Gesture motion recognition method, system and storage medium based on GPGPU |
US12056268B2 (en) | 2021-08-17 | 2024-08-06 | Meta Platforms Technologies, Llc | Platformization of mixed reality objects in virtual reality environments |
US12067688B2 (en) | 2022-02-14 | 2024-08-20 | Meta Platforms Technologies, Llc | Coordination of interactions of virtual objects |
US12093447B2 (en) | 2022-01-13 | 2024-09-17 | Meta Platforms Technologies, Llc | Ephemeral artificial reality experiences |
US12097427B1 (en) | 2022-08-26 | 2024-09-24 | Meta Platforms Technologies, Llc | Alternate avatar controls |
US12099693B2 (en) | 2019-06-07 | 2024-09-24 | Meta Platforms Technologies, Llc | Detecting input in artificial reality systems based on a pinch and pull gesture |
US12108184B1 (en) | 2017-07-17 | 2024-10-01 | Meta Platforms, Inc. | Representing real-world objects with a virtual reality environment |
US12106440B2 (en) | 2021-07-01 | 2024-10-01 | Meta Platforms Technologies, Llc | Environment model with surfaces and per-surface volumes |
US12130967B2 (en) | 2023-04-04 | 2024-10-29 | Meta Platforms Technologies, Llc | Integration of artificial reality interaction modes |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110109547A (en) * | 2019-05-05 | 2019-08-09 | 芋头科技(杭州)有限公司 | Order Activiation method and system based on gesture identification |
CN110222580B (en) * | 2019-05-09 | 2021-10-22 | 中国科学院软件研究所 | Human hand three-dimensional attitude estimation method and device based on three-dimensional point cloud |
US11543888B2 (en) | 2019-06-27 | 2023-01-03 | Google Llc | Intent detection with a computing device |
CN110321566B (en) * | 2019-07-10 | 2020-11-13 | 北京邮电大学 | Chinese named entity recognition method and device, computer equipment and storage medium |
CN110543916B (en) * | 2019-09-06 | 2022-02-01 | 天津大学 | Method and system for classifying missing multi-view data |
CN110865704B (en) * | 2019-10-21 | 2021-04-27 | 浙江大学 | Gesture interaction device and method for 360-degree suspended light field three-dimensional display system |
WO2021098543A1 (en) * | 2019-11-20 | 2021-05-27 | Oppo广东移动通信有限公司 | Gesture recognition method and apparatus, and storage medium |
CN111444771B (en) * | 2020-02-27 | 2022-06-21 | 浙江大学 | Gesture preposing real-time identification method based on recurrent neural network |
US11227151B2 (en) * | 2020-03-05 | 2022-01-18 | King Fahd University Of Petroleum And Minerals | Methods and systems for computerized recognition of hand gestures |
CN111523380B (en) * | 2020-03-11 | 2023-06-30 | 浙江工业大学 | Mask wearing condition monitoring method based on face and gesture recognition |
CN111444820B (en) * | 2020-03-24 | 2021-06-04 | 清华大学 | Gesture recognition method based on imaging radar |
US11514605B2 (en) * | 2020-09-29 | 2022-11-29 | International Business Machines Corporation | Computer automated interactive activity recognition based on keypoint detection |
US11804040B2 (en) | 2021-03-17 | 2023-10-31 | Qualcomm Incorporated | Keypoint-based sampling for pose estimation |
WO2022197367A1 (en) * | 2021-03-17 | 2022-09-22 | Qualcomm Technologies, Inc. | Keypoint-based sampling for pose estimation |
US11757951B2 (en) | 2021-05-28 | 2023-09-12 | Vizio, Inc. | System and method for configuring video watch parties with gesture-specific telemojis |
JP2023139535A (en) | 2022-03-22 | 2023-10-04 | キヤノン株式会社 | Gesture recognition apparatus, head-mounted display apparatus, gesture recognition method, program, and storage medium |
JP2023139534A (en) | 2022-03-22 | 2023-10-04 | キヤノン株式会社 | Gesture recognition apparatus, head-mounted display apparatus, gesture recognition method, program, and storage medium |
CN114882443A (en) * | 2022-05-31 | 2022-08-09 | 江苏濠汉信息技术有限公司 | Edge computing system applied to cable accessory construction |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150177842A1 (en) * | 2013-12-23 | 2015-06-25 | Yuliya Rudenko | 3D Gesture Based User Authorization and Device Control Methods |
US9720515B2 (en) * | 2015-01-02 | 2017-08-01 | Wearable Devices Ltd. | Method and apparatus for a gesture controlled interface for wearable devices |
US9953216B2 (en) * | 2015-01-13 | 2018-04-24 | Google Llc | Systems and methods for performing actions in response to user gestures in captured images |
KR101745406B1 (en) * | 2015-09-03 | 2017-06-12 | 한국과학기술연구원 | Apparatus and method of hand gesture recognition based on depth image |
CN106325509A (en) * | 2016-08-19 | 2017-01-11 | 北京暴风魔镜科技有限公司 | Three-dimensional gesture recognition method and system |
-
2018
- 2018-06-25 EP EP18179440.5A patent/EP3467707B1/en active Active
- 2018-06-27 US US16/020,245 patent/US10429944B2/en active Active
- 2018-09-04 IL IL261580A patent/IL261580B/en active IP Right Grant
- 2018-09-06 JP JP2018167317A patent/JP6716650B2/en active Active
- 2018-09-07 CA CA3016921A patent/CA3016921C/en active Active
- 2018-09-20 CN CN201811098719.8A patent/CN109635621B/en active Active
Cited By (86)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180108165A1 (en) * | 2016-08-19 | 2018-04-19 | Beijing Sensetime Technology Development Co., Ltd | Method and apparatus for displaying business object in video image and electronic device |
US11037348B2 (en) * | 2016-08-19 | 2021-06-15 | Beijing Sensetime Technology Development Co., Ltd | Method and apparatus for displaying business object in video image and electronic device |
US12108184B1 (en) | 2017-07-17 | 2024-10-01 | Meta Platforms, Inc. | Representing real-world objects with a virtual reality environment |
US20190204930A1 (en) * | 2018-01-02 | 2019-07-04 | Boe Technology Group Co., Ltd. | Gesture recognition device, gesture recognition method, and gesture recognition system |
US10725553B2 (en) * | 2018-01-02 | 2020-07-28 | Boe Technology Group Co., Ltd. | Gesture recognition device, gesture recognition method, and gesture recognition system |
US20200005539A1 (en) * | 2018-06-27 | 2020-01-02 | Facebook Technologies, Llc | Visual flairs for emphasizing gestures in artificial-reality environments |
US10635895B2 (en) | 2018-06-27 | 2020-04-28 | Facebook Technologies, Llc | Gesture-based casting and manipulation of virtual content in artificial-reality environments |
US11157725B2 (en) | 2018-06-27 | 2021-10-26 | Facebook Technologies, Llc | Gesture-based casting and manipulation of virtual content in artificial-reality environments |
US10712901B2 (en) | 2018-06-27 | 2020-07-14 | Facebook Technologies, Llc | Gesture-based content sharing in artificial reality environments |
US10783712B2 (en) * | 2018-06-27 | 2020-09-22 | Facebook Technologies, Llc | Visual flairs for emphasizing gestures in artificial-reality environments |
US11061479B2 (en) * | 2018-07-04 | 2021-07-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, device and readable storage medium for processing control instruction based on gesture recognition |
US12003849B2 (en) | 2019-02-19 | 2024-06-04 | Samsung Electronics Co., Ltd. | Electronic device and method for changing magnification of image using multiple cameras |
US11509830B2 (en) * | 2019-02-19 | 2022-11-22 | Samsung Electronics Co., Ltd. | Electronic device and method for changing magnification of image using multiple cameras |
US20210400206A1 (en) * | 2019-02-19 | 2021-12-23 | Samsung Electronics Co., Ltd. | Electronic device and method for changing magnification of image using multiple cameras |
JP7181375B2 (en) | 2019-04-29 | 2022-11-30 | 北京字節跳動網絡技術有限公司 | Target object motion recognition method, device and electronic device |
JP2021531589A (en) * | 2019-04-29 | 2021-11-18 | 北京字節跳動網絡技術有限公司Beijing Bytedance Network Technology Co., Ltd. | Motion recognition method, device and electronic device for target |
US11176699B2 (en) * | 2019-05-24 | 2021-11-16 | Tencent America LLC | Augmenting reliable training data with CycleGAN for hand pose estimation |
CN110286749A (en) * | 2019-05-27 | 2019-09-27 | 华中师范大学 | Hand gesture estimation and method for tracing based on depth data |
US12099693B2 (en) | 2019-06-07 | 2024-09-24 | Meta Platforms Technologies, Llc | Detecting input in artificial reality systems based on a pinch and pull gesture |
US11488320B2 (en) | 2019-07-31 | 2022-11-01 | Samsung Electronics Co., Ltd. | Pose estimation method, pose estimation apparatus, and training method for pose estimation |
US11468644B2 (en) | 2019-09-20 | 2022-10-11 | Meta Platforms Technologies, Llc | Automatic projection type selection in an artificial reality environment |
US11170576B2 (en) | 2019-09-20 | 2021-11-09 | Facebook Technologies, Llc | Progressive display of virtual objects |
US11947111B2 (en) | 2019-09-20 | 2024-04-02 | Meta Platforms Technologies, Llc | Automatic projection type selection in an artificial reality environment |
US10991163B2 (en) | 2019-09-20 | 2021-04-27 | Facebook Technologies, Llc | Projection casting in virtual environments |
US11176745B2 (en) | 2019-09-20 | 2021-11-16 | Facebook Technologies, Llc | Projection casting in virtual environments |
US11086406B1 (en) * | 2019-09-20 | 2021-08-10 | Facebook Technologies, Llc | Three-state gesture virtual controls |
US11189099B2 (en) | 2019-09-20 | 2021-11-30 | Facebook Technologies, Llc | Global and local mode virtual object interactions |
US11257295B2 (en) | 2019-09-20 | 2022-02-22 | Facebook Technologies, Llc | Projection casting in virtual environments |
US11386578B2 (en) * | 2019-10-18 | 2022-07-12 | Htc Corporation | Image labeling system of a hand in an image |
US12056885B2 (en) * | 2019-10-18 | 2024-08-06 | Htc Corporation | Method for automatically generating hand marking data and calculating bone length |
CN112767300A (en) * | 2019-10-18 | 2021-05-07 | 宏达国际电子股份有限公司 | Method for automatically generating labeling data of hand and method for calculating skeleton length |
CN112686084A (en) * | 2019-10-18 | 2021-04-20 | 宏达国际电子股份有限公司 | Image annotation system |
US11556220B1 (en) * | 2019-10-23 | 2023-01-17 | Meta Platforms Technologies, Llc | 3D interactions with web content |
US11086476B2 (en) * | 2019-10-23 | 2021-08-10 | Facebook Technologies, Llc | 3D interactions with web content |
US11609625B2 (en) | 2019-12-06 | 2023-03-21 | Meta Platforms Technologies, Llc | Posture-based virtual space configurations |
US11175730B2 (en) | 2019-12-06 | 2021-11-16 | Facebook Technologies, Llc | Posture-based virtual space configurations |
US11972040B2 (en) | 2019-12-06 | 2024-04-30 | Meta Platforms Technologies, Llc | Posture-based virtual space configurations |
US11861757B2 (en) | 2020-01-03 | 2024-01-02 | Meta Platforms Technologies, Llc | Self presence in artificial reality |
CN111273778A (en) * | 2020-02-14 | 2020-06-12 | 北京百度网讯科技有限公司 | Method and device for controlling electronic equipment based on gestures |
US11257280B1 (en) | 2020-05-28 | 2022-02-22 | Facebook Technologies, Llc | Element-based switching of ray casting rules |
US11625103B2 (en) | 2020-06-29 | 2023-04-11 | Meta Platforms Technologies, Llc | Integration of artificial reality interaction modes |
US11256336B2 (en) | 2020-06-29 | 2022-02-22 | Facebook Technologies, Llc | Integration of artificial reality interaction modes |
US11227445B1 (en) | 2020-08-31 | 2022-01-18 | Facebook Technologies, Llc | Artificial reality augments and surfaces |
US11176755B1 (en) | 2020-08-31 | 2021-11-16 | Facebook Technologies, Llc | Artificial reality augments and surfaces |
US11651573B2 (en) | 2020-08-31 | 2023-05-16 | Meta Platforms Technologies, Llc | Artificial realty augments and surfaces |
US11769304B2 (en) | 2020-08-31 | 2023-09-26 | Meta Platforms Technologies, Llc | Artificial reality augments and surfaces |
US11847753B2 (en) | 2020-08-31 | 2023-12-19 | Meta Platforms Technologies, Llc | Artificial reality augments and surfaces |
CN112199994A (en) * | 2020-09-03 | 2021-01-08 | 中国科学院信息工程研究所 | Method and device for detecting interaction between 3D hand and unknown object in RGB video in real time |
US11178376B1 (en) | 2020-09-04 | 2021-11-16 | Facebook Technologies, Llc | Metering for display modes in artificial reality |
US11637999B1 (en) | 2020-09-04 | 2023-04-25 | Meta Platforms Technologies, Llc | Metering for display modes in artificial reality |
CN114510142A (en) * | 2020-10-29 | 2022-05-17 | 舜宇光学(浙江)研究院有限公司 | Gesture recognition method based on two-dimensional image, system thereof and electronic equipment |
US11636655B2 (en) | 2020-11-17 | 2023-04-25 | Meta Platforms Technologies, Llc | Artificial reality environment with glints displayed by an extra reality device |
CN114515146A (en) * | 2020-11-17 | 2022-05-20 | 北京机械设备研究所 | Intelligent gesture recognition method and system based on electrical measurement |
US11113893B1 (en) | 2020-11-17 | 2021-09-07 | Facebook Technologies, Llc | Artificial reality environment with glints displayed by an extra reality device |
US11928308B2 (en) | 2020-12-22 | 2024-03-12 | Meta Platforms Technologies, Llc | Augment orchestration in an artificial reality environment |
US11409405B1 (en) | 2020-12-22 | 2022-08-09 | Facebook Technologies, Llc | Augment orchestration in an artificial reality environment |
US11461973B2 (en) | 2020-12-22 | 2022-10-04 | Meta Platforms Technologies, Llc | Virtual reality locomotion via hand gesture |
US11294475B1 (en) | 2021-02-08 | 2022-04-05 | Facebook Technologies, Llc | Artificial reality multi-modal input switching model |
CN113010018A (en) * | 2021-04-20 | 2021-06-22 | 歌尔股份有限公司 | Interaction control method, terminal device and storage medium |
CN113378641A (en) * | 2021-05-12 | 2021-09-10 | 北京工业大学 | Gesture recognition method based on deep neural network and attention mechanism |
CN113239824A (en) * | 2021-05-19 | 2021-08-10 | 北京工业大学 | Dynamic gesture recognition method for multi-modal training single-modal test based on 3D-Ghost module |
CN113296604A (en) * | 2021-05-24 | 2021-08-24 | 北京航空航天大学 | True 3D gesture interaction method based on convolutional neural network |
US11893674B2 (en) | 2021-06-28 | 2024-02-06 | Meta Platforms Technologies, Llc | Interactive avatars in artificial reality |
US11762952B2 (en) | 2021-06-28 | 2023-09-19 | Meta Platforms Technologies, Llc | Artificial reality application lifecycle |
US12106440B2 (en) | 2021-07-01 | 2024-10-01 | Meta Platforms Technologies, Llc | Environment model with surfaces and per-surface volumes |
US12008717B2 (en) | 2021-07-07 | 2024-06-11 | Meta Platforms Technologies, Llc | Artificial reality environment control through an artificial reality environment schema |
US12056268B2 (en) | 2021-08-17 | 2024-08-06 | Meta Platforms Technologies, Llc | Platformization of mixed reality objects in virtual reality environments |
US11748944B2 (en) | 2021-10-27 | 2023-09-05 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US11935208B2 (en) | 2021-10-27 | 2024-03-19 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US11798247B2 (en) | 2021-10-27 | 2023-10-24 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US12086932B2 (en) | 2021-10-27 | 2024-09-10 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
CN114185429A (en) * | 2021-11-11 | 2022-03-15 | 杭州易现先进科技有限公司 | Method for positioning gesture key points or estimating gesture, electronic device and storage medium |
WO2023122543A1 (en) * | 2021-12-20 | 2023-06-29 | Canon U.S.A., Inc. | Apparatus and method for gesture recognition stabilization |
US12093447B2 (en) | 2022-01-13 | 2024-09-17 | Meta Platforms Technologies, Llc | Ephemeral artificial reality experiences |
US12067688B2 (en) | 2022-02-14 | 2024-08-20 | Meta Platforms Technologies, Llc | Coordination of interactions of virtual objects |
CN114979302A (en) * | 2022-04-22 | 2022-08-30 | 长江大学 | Self-adaptive entropy-based rapid worker action image transmission method and system |
CN115079818A (en) * | 2022-05-07 | 2022-09-20 | 北京聚力维度科技有限公司 | Hand capturing method and system |
US12026527B2 (en) | 2022-05-10 | 2024-07-02 | Meta Platforms Technologies, Llc | World-controlled and application-controlled augments in an artificial-reality environment |
US12097427B1 (en) | 2022-08-26 | 2024-09-24 | Meta Platforms Technologies, Llc | Alternate avatar controls |
US11947862B1 (en) | 2022-12-30 | 2024-04-02 | Meta Platforms Technologies, Llc | Streaming native application content to artificial reality devices |
US12130967B2 (en) | 2023-04-04 | 2024-10-29 | Meta Platforms Technologies, Llc | Integration of artificial reality interaction modes |
US11991222B1 (en) | 2023-05-02 | 2024-05-21 | Meta Platforms Technologies, Llc | Persistent call control user interface element in an artificial reality environment |
CN117687517A (en) * | 2024-02-02 | 2024-03-12 | 北京思路智园科技有限公司 | Augmented reality teaching improvement method and system for chemical engineering teaching culture |
CN118131915A (en) * | 2024-05-07 | 2024-06-04 | 中国人民解放军国防科技大学 | Man-machine interaction method, device, equipment and storage medium based on gesture recognition |
CN118170258A (en) * | 2024-05-13 | 2024-06-11 | 湖北星纪魅族集团有限公司 | Click operation method and device, electronic equipment and storage medium |
CN118351598A (en) * | 2024-06-12 | 2024-07-16 | 山东浪潮科学研究院有限公司 | Gesture motion recognition method, system and storage medium based on GPGPU |
Also Published As
Publication number | Publication date |
---|---|
CN109635621B (en) | 2023-04-14 |
EP3467707A1 (en) | 2019-04-10 |
EP3467707B1 (en) | 2024-03-13 |
JP2019071048A (en) | 2019-05-09 |
CN109635621A (en) | 2019-04-16 |
CA3016921C (en) | 2023-06-27 |
JP6716650B2 (en) | 2020-07-01 |
CA3016921A1 (en) | 2019-04-07 |
IL261580B (en) | 2021-06-30 |
IL261580A (en) | 2019-02-28 |
US10429944B2 (en) | 2019-10-01 |
EP3467707C0 (en) | 2024-03-13 |
Similar Documents
Publication | Title |
---|---|
US10429944B2 (en) | System and method for deep learning based hand gesture recognition in first person view |
US11126835B2 (en) | Hand detection in first person view |
EP3686772B1 (en) | On-device classification of fingertip motion patterns into gestures in real-time |
US11233952B2 (en) | Selective identification and order of image modifiers |
US20230362232A1 (en) | Content collection navigation and autoforwarding |
WO2020216054A1 (en) | Sight line tracking model training method, and sight line tracking method and device |
US11367194B1 (en) | Image segmentation of a video stream |
US11789582B2 (en) | Content collection navigation queue |
KR102173123B1 (en) | Method and apparatus for recognizing object of image in electronic device |
US11443438B2 (en) | Network module and distribution method and apparatus, electronic device, and storage medium |
US9536161B1 (en) | Visual and audio recognition for scene change events |
US12008811B2 (en) | Machine learning-based selection of a representative video frame within a messaging application |
WO2018120082A1 (en) | Apparatus, method and computer program product for deep learning |
CN117274383A (en) | Viewpoint prediction method and device, electronic equipment and storage medium |
KR20200127928A (en) | Method and apparatus for recognizing object of image in electronic device |
US10831360B2 (en) | Telepresence framework for region of interest marking using headmount devices |
CN111310595A (en) | Method and apparatus for generating information |
US9727778B2 (en) | System and method for guided continuous body tracking for complex interaction |
US11501528B1 (en) | Selector input device to perform operations on captured media content items |
US11863860B2 (en) | Image capture eyewear with context-based sending |
US20240282058A1 (en) | Generating user interfaces displaying augmented reality graphics |
Aydin | Leveraging Computer Vision Techniques for Video and Web Accessibility |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: TATA CONSULTANCY SERVICES LIMITED, INDIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: HEBBALAGUPPE, RAMYA SUGNANA MURTHY; PERLA, RAMAKRISHNA; REEL/FRAME: 046217/0438. Effective date: 20171003 |
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |