WO2015012896A1 - Gesture recognition method and apparatus based on analysis of multiple candidate boundaries - Google Patents


Info

Publication number
WO2015012896A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
boundaries
sets
candidate
estimates
Application number
PCT/US2014/031471
Other languages
French (fr)
Inventor
Dmitry N. BABIN
Ivan L. MAZURENKO
Alexander A. PETYUSHKO
Aleksey A. LETUNOVSKIY
Denis V. ZAYTSEV
Original Assignee
Lsi Corporation
Application filed by Lsi Corporation
Publication of WO2015012896A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions

Definitions

  • The palm boundary detection and recognition processing performed by modules 115 and 116 makes use of class parameters and mapping functions 119 generated by training module 118 using images from the training database 400 of FIG. 4. The training database 400 may be implemented within image processor 102, possibly utilizing a portion of memory 122 or another suitable storage device, or alternatively may be implemented externally to image processor 102 on one or more of the processing devices 106.
  • In the present embodiment, the training module 118 determines the class parameters using Gaussian mixture models (GMMs). A GMM is a statistical multidimensional distribution based on a number of weighted multivariate normal distributions, and may be written as

p(x) = Σ_{i=1}^{M} w_i p_i(x),

where p(x) is the probability of vector x, M is the number of components or "clusters" in the GMM, the weights w_i are nonnegative and sum to one, and p_i(x) is the multivariate normal distribution of the i-th cluster, i.e. p_i(x) ~ N(μ_i, Σ_i), where μ_i is an N x 1 mean vector and Σ_i is an N x N nonnegative-definite covariance matrix such that

p_i(x) = (2π)^{-N/2} |Σ_i|^{-1/2} exp( -(x - μ_i)^T Σ_i^{-1} (x - μ_i) / 2 ),

where T in this equation denotes the transpose operator. The GMM parameters for each class may be determined from the corresponding training vectors using the Expectation-Maximization algorithm (EM-alg).
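As a concrete illustration of this mixture model, the following minimal Python sketch evaluates the density above and fits the parameters with EM via scikit-learn; the variable names and placeholder data are illustrative assumptions, not part of the patent.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmm_pdf(x, weights, means, covs):
    """Evaluate p(x) = sum_i w_i * N(x; mu_i, Sigma_i) for a feature vector x."""
    return sum(w * multivariate_normal.pdf(x, mean=mu, cov=cov)
               for w, mu, cov in zip(weights, means, covs))

# Fit a GMM to feature vectors of one class; GaussianMixture runs EM internally.
rng = np.random.default_rng(0)
train_vectors = rng.standard_normal((200, 6))          # placeholder data, N = 6
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(train_vectors)

x = train_vectors[0]
print(gmm_pdf(x, gmm.weights_, gmm.means_, gmm.covariances_))
print(np.exp(gmm.score_samples(x[None, :]))[0])        # same density via sklearn
```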
  • The K classes may correspond to respective ones of a plurality of different static hand gestures, also referred to herein as hand poses, such as an open palm as illustrated in FIGS. 2 and 3, a fist, a forefinger or "poke" gesture, and so on. Other embodiments can be configured to recognize other types of gestures, including dynamic gestures. The training module 118 processes one or more training images from training database 400 for each of these represented classes. The training database 400 should include training images having properly recognized palm boundaries and associated hand gestures in normalized form. For example, these training images should have substantially the same width and height in pixels, and similar orientation and scale, as the normalized images to be processed by the modules 115 and 116 of the GR system 110. The appropriate palm boundary in each training image may be determined by an expert and annotated accordingly on the image.
  • The training module 118 also generates one or more mapping functions 119B, including a mapping function F that, when applied to a given normalized input image I from the training database 400, yields a vector x within R^N. Typically the value of N is much less than the number of pixels in the image, and so the processing performed by the training module 118 could be based on features extracted from the image, such as palm width, height, perimeter, area and central moments.
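By way of illustration only, one plausible mapping function F might threshold the image to isolate the hand silhouette and compute a handful of such features; the particular feature set and threshold below are assumptions, not prescribed by the patent.

```python
import numpy as np

def mapping_function(image, threshold=0.0):
    """Map an image to a feature vector x in R^N (here N = 6)."""
    mask = image > threshold                 # hand silhouette pixels
    area = int(mask.sum())
    if area == 0:
        return np.zeros(6)
    ys, xs = np.nonzero(mask)
    width = xs.max() - xs.min() + 1          # bounding-box width in pixels
    height = ys.max() - ys.min() + 1         # bounding-box height in pixels
    # Second-order central moments of the silhouette.
    mu_x, mu_y = xs.mean(), ys.mean()
    m20 = ((xs - mu_x) ** 2).mean()
    m02 = ((ys - mu_y) ** 2).mean()
    m11 = ((xs - mu_x) * (ys - mu_y)).mean()
    return np.array([area, width, height, m20, m02, m11], dtype=float)
```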
  • The class parameters 119A, comprising the optimal parameters T_1, ..., T_K for the respective classes, and the corresponding mapping function 119B are made accessible to the palm boundary detection module 115 and recognition module 116 for use in determining palm boundaries and recognizing gestures in the input images after those images are preprocessed in preprocessing module 114.
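Putting these training pieces together, a hypothetical training loop (reusing the illustrative mapping_function above) might produce one fitted model per gesture class, standing in for the parameters T_j:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_parameters(images_by_class, n_components=3):
    """Fit one GMM per gesture class from annotated, normalized training images.

    images_by_class maps class index j (1..K) to a list of training images
    that already have the correct, expert-annotated palm boundary applied.
    Returns {j: fitted GaussianMixture}, a stand-in for the parameters T_j.
    """
    models = {}
    for j, images in images_by_class.items():
        vectors = np.array([mapping_function(img) for img in images])
        models[j] = GaussianMixture(n_components=n_components,
                                    covariance_type="full").fit(vectors)
    return models
```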
  • Referring now to FIG. 5, an exemplary process is shown for gesture recognition based on palm boundary detection in the image processing system 100 of FIG. 1. The FIG. 5 process is assumed to be implemented by the image processor 102 using its preprocessing module 114, palm boundary detection module 115 and recognition module 116, as well as class parameters and mapping functions 119 provided by training module 118, although one or more of the described operations can be performed by other system components in other embodiments. Steps 514, 515 and 516 of the FIG. 5 process generally include preprocessing, palm boundary detection and class recognition operations performed by the respective modules 114, 115 and 116 of the image processor 102. Other related operations are performed in multiple instances of steps 530, 532 and 534.
  • In the FIG. 5 process, an orientation normalization operation 502 and a scale normalization operation 504 are applied to the input image J to generate a normalized input image I. The orientation normalization may involve determining the main direction 202 of hand 200 within the input image J, possibly using PCA or a similar technique, and then rotating the input image by an amount based on the determined main direction of the hand.
  • In steps 530, respective cut images I_1, ..., I_S are generated for respective ones of the candidate palm boundaries 1, ..., S. The image I_t, 1 ≤ t ≤ S, corresponds to the t-th candidate palm boundary, and is the same as the normalized input image I for pixels above the t-th palm boundary, and has all zeros, ones, average background values or other predetermined values as its pixel values at or below the t-th palm boundary, such that each of the images I_1, ..., I_S has the same pixel values as the normalized input image I for all pixels above its corresponding palm boundary, but has predetermined pixel values for all of its pixels at or below that palm boundary. Each of the images I_1, ..., I_S may therefore be viewed as being "cut" at the corresponding palm boundary. In steps 532, the mapping function F is applied to the cut images I_1, ..., I_S to generate respective vectors x^1, ..., x^S; the resulting vectors are also referred to herein as feature vectors.
  • Steps 534-t,j generally involve determining sets of probabilistic estimates for respective ones of the vectors x^1 through x^S relative to the sets of optimal parameters T_1, ..., T_K, where 1 ≤ t ≤ S and 1 ≤ j ≤ K. Each of the sets of optimal parameters T_j is associated with a corresponding one of a plurality of static hand gestures to be recognized by the GR system 110. Each set of probabilistic estimates is determined in this embodiment as a set of estimates p(x^t | T_j) for a given value of index t relative to the sets of optimal parameters T_j, where index j takes on integer values between 1 and K. Thus, steps 534-1,1 through 534-1,K determine a first set of probabilistic estimates p(x^1 | T_1) through p(x^1 | T_K), and steps 534-S,1 through 534-S,K determine an S-th set of probabilistic estimates p(x^S | T_1) through p(x^S | T_K). Other instances of steps 534 not explicitly shown determine the remaining sets of probabilistic estimates for respective ones of the remaining vectors x^2 through x^{S-1}.
  • Step 515 utilizes the resulting sets of probabilistic estimates to select a particular one of the candidate palm boundaries. More particularly, the palm boundary is selected in step 515 in accordance with the following equation:

b = argmax_{1 ≤ t ≤ S} ( max_{1 ≤ j ≤ K} p(x^t | T_j) )        (1)

where b denotes the particular palm boundary selected based on the sets of probabilistic estimates, and may take on any integer value between 1 and S.
  • Step 516 utilizes the same sets of probabilistic estimates to select a particular one of the K image classes, illustratively in accordance with the following equation:

c = argmax_{1 ≤ j ≤ K} ( max_{1 ≤ t ≤ S} p(x^t | T_j) )        (2)

where c denotes the selected image class and may take on any integer value between 1 and K, such that the recognized gesture is the one associated with class c.
  • Negative log-likelihood (NLL) estimates may be used in order to simplify arithmetic computations in some embodiments, in which case all instances of "max" should be replaced with corresponding instances of "min" in equations (1) and (2) of respective steps 515 and 516. The term "estimates" as used herein is intended to be broadly construed so as to encompass NLL estimates of the type noted above as well as other types of estimates that may or may not be based on probabilities.
  • Although GMMs and EM-alg are utilized in the training process in this embodiment, any of a wide variety of other classification techniques may be used in training module 118 to determine appropriate class parameters and mapping functions 119 for use in palm boundary detection and associated gesture recognition operations. For example, well-known techniques based on decision trees, neural networks, or nearest neighbor classification may be adapted for use in embodiments of the invention.
  • These and other techniques can be applied in a straightforward manner to allow estimation of the likelihood function p(x^t | T_j) for each class. In addition, other types of estimates not necessarily of a probabilistic nature may be utilized.
  • The exemplary processing shown in FIG. 5 not only determines the palm boundary within a given input image but also performs a recognition function by classifying the corresponding gesture. Accordingly, at least a portion of the processing operations may be viewed as being performed by an integrated palm boundary detection and gesture recognition module. Other embodiments may perform only the palm boundary detection, possibly as part of a preprocessing operation, with recognition being performed as a separate operation based on the detected palm boundary.
  • The FIG. 5 process can be pipelined in a straightforward manner. For example, at least a portion of the steps can be performed using parallel computations, thereby reducing the overall latency of the process for a given input image and facilitating implementation of the described techniques in real-time image processing applications. In particular, the probabilistic estimates p(x^t | T_j) may be calculated independently on parallel processing hardware, with intermediate results or final values subsequently combined using the argmax(max(·)) function in steps 515 and 516.
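For instance, the S x K table of estimates could be filled in concurrently; this sketch uses Python's standard thread pool, whereas a production system might instead use SIMD, GPU kernels or dedicated image processing circuitry.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def parallel_estimates(feature_vectors, class_likelihoods):
    """Compute all S*K estimates p(x^t | T_j) concurrently; returns an S-by-K table."""
    S, K = len(feature_vectors), len(class_likelihoods)
    pairs = list(product(range(S), range(K)))      # all (t, j) index pairs
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(
            lambda tj: class_likelihoods[tj[1]](feature_vectors[tj[0]]), pairs))
    return np.array(values).reshape(S, K)          # same table as the sketch above
```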
  • At least portions of the GR-based output 112 may be further processed in the image processor 102, or supplied to another processing device 106 or image destination, as mentioned previously.
  • Embodiments of the invention provide particularly efficient techniques for boundary detection based gesture recognition.
  • For example, one or more of these embodiments can perform joint boundary detection and gesture recognition that allows a system to obtain both boundary and recognition results at substantially the same time. In such an arrangement, the boundary determination is integrated with the recognition process in a manner that facilitates highly efficient parallel implementation using image processing circuitry on one or more processing devices. Moreover, the disclosed embodiments can be configured to utilize GMMs or a wide variety of other classification techniques.

Abstract

An image processing system comprises an image processor configured to identify a plurality of candidate boundaries in an image, to obtain corresponding modified images for respective ones of the candidate boundaries, to apply a mapping function to each of the modified images to generate a corresponding vector, to determine sets of estimates for respective ones of the vectors relative to designated class parameters, and to select a particular one of the candidate boundaries based on the sets of estimates. The designated class parameters may include sets of class parameters for respective ones of a plurality of classes each corresponding to a different gesture to be recognized. The candidate boundaries may comprise candidate palm boundaries associated with a hand in the image. The image processor may be further configured to select a particular one of the plurality of classes to recognize the corresponding gesture based on the sets of estimates.

Description

GESTURE RECOGNITION METHOD AND APPARATUS
BASED ON ANALYSIS OF MULTIPLE CANDIDATE BOUNDARIES
Field
The field relates generally to image processing, and more particularly to processing for recognition of gestures.
Background
Image processing is important in a wide variety of different applications, and such processing may involve two-dimensional (2D) images, three-dimensional (3D) images, or combinations of multiple images of different types. For example, a 3D image of a spatial scene may be generated in an image processor using triangulation based on multiple 2D images captured by respective cameras arranged such that each camera has a different view of the scene. Alternatively, a 3D image can be generated directly using a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera. These and other 3D images, which are also referred to herein as depth images, are commonly utilized in machine vision applications such as gesture recognition.
In typical conventional arrangements, raw image data from an image sensor is usually subject to various preprocessing operations. Such preprocessing operations may include, for example, contrast enhancement, histogram equalization, noise reduction, edge highlighting and coordinate space transformation, among many others. The preprocessed image data is then subject to additional processing needed to implement gesture recognition for use in applications such as video gaming systems or other systems implementing a gesture-based human-machine interface.
Summary
In one embodiment, an image processing system comprises an image processor configured to identify a plurality of candidate boundaries in an image, to obtain corresponding modified images for respective ones of the candidate boundaries, to apply a mapping function to each of the modified images to generate a corresponding vector, to determine sets of estimates for respective ones of the vectors relative to designated class parameters, and to select a particular one of the candidate boundaries based on the sets of estimates.
By way of example only, the designated class parameters may include sets of class parameters for respective ones of a plurality of classes each corresponding to a different gesture to be recognized. The image processor may be further configured to select a particular one of the plurality of classes to recognize the corresponding gesture based on the sets of estimates. Thus, the gesture recognition may be performed jointly with the selection of a particular one of the candidate boundaries.
In some embodiments, the candidate boundaries may comprise candidate palm boundaries associated with a hand in the image.
Other embodiments of the invention include but are not limited to methods, apparatus, systems, processing devices, integrated circuits, and computer-readable storage media having computer program code embodied therein.

Brief Description of the Drawings
FIG. 1 is a block diagram of an image processing system comprising an image processor configured for palm boundary detection based gesture recognition in an illustrative embodiment.
FIG. 2 shows an image of a hand prior to rotation based on determination of main direction.
FIG. 3 shows the image of FIG. 2 after rotation and with multiple candidate palm boundaries superimposed on the hand.
FIG. 4 illustrates an exemplary training process implemented in the FIG. 1 system.

FIG. 5 is a flow diagram of an exemplary palm boundary detection based gesture recognition process implemented in the FIG. 1 system.
Detailed Description
Embodiments of the invention will be illustrated herein in conjunction with exemplary image processing systems that include image processors or other types of processing devices and implement techniques for gesture recognition based on palm boundary detection. It should be understood, however, that embodiments of the invention are more generally applicable to any image processing system or associated device or technique that involves detecting palm boundaries in one or more images.
FIG. 1 shows an image processing system 100 in an embodiment of the invention. The image processing system 100 comprises an image processor 102 that is configured for communication over a network 104 with a plurality of processing devices 106. The image processor 102 implements a gesture recognition (GR) system 110. The GR system 110 in this embodiment processes input images 111 from one or more image sources and provides corresponding GR-based output 112. The GR-based output 112 may be supplied to one or more of the processing devices 106 or to other system components not specifically illustrated in this diagram.
The GR system 110 more particularly comprises a preprocessing module 114, a palm boundary detection module 115, a recognition module 116 and an application module 117. A training module 118 generates class parameters and mapping functions 119 that are utilized by the palm boundary detection and recognition modules 115 and 116 in generating recognition events for processing by the application module 117. Although illustratively shown as residing outside the GR system 110 in the figure, elements 118 and 119 may be at least partially implemented within GR system 110 in other embodiments.
Portions of the GR system 110 may be implemented using separate processing layers of the image processor 102. These processing layers comprise at least a portion of what is more generally referred to herein as "image processing circuitry" of the image processor 102. For example, the image processor 102 may comprise a preprocessing layer implementing preprocessing module 114 and a plurality of higher processing layers each configured to implement one or more of palm boundary detection module 115, recognition module 116 and application module 117. Such processing layers may also be referred to herein as respective subsystems of the GR system 110.
It should be noted, however, that embodiments of the invention are not limited to recognition of hand gestures, but can instead be adapted for use in a wide variety of other machine vision applications involving gesture recognition, and may comprise different numbers, types and arrangements of layers in other embodiments.
Also, certain of the processing modules of the image processor 102 may instead be implemented at least in part on other devices in other embodiments. For example, preprocessing module 114 may be implemented at least in part in an image source comprising a depth imager or other type of imager that provides at least a portion of the input images 111. It is also possible that application module 117 may be implemented on a different processing device than the palm boundary detection module 115 and the recognition module 116, such as one of the processing devices 106.
Moreover, it is to be appreciated that the image processor 102 may itself comprise multiple distinct processing devices, such that the processing modules 114, 115, 116 and 117 of the GR system 110 are implemented using two or more processing devices. The term "image processor" as used herein is intended to be broadly construed so as to encompass these and other arrangements.
The preprocessing module 114 performs preprocessing operations on received input images 111 from one or more image sources. This received image data in the present embodiment is assumed to comprise raw image data received from a depth sensor, but other types of received image data may be processed in other embodiments. The preprocessing module 114 provides preprocessed image data to the palm boundary detection module 115 and possibly also the recognition module 116.
The raw image data received in the preprocessing module 114 from the depth sensor may include a stream of frames comprising respective depth images, with each such depth image comprising a plurality of depth image pixels. For example, a given depth image D may be provided to the preprocessing module 114 in the form of a matrix of real values. A given such depth image is also referred to herein as a depth map.
A wide variety of other types of images or combinations of multiple images may be used in other embodiments. It should therefore be understood that the term "image" as used herein is intended to be broadly construed.
The image processor 102 may interface with a variety of different image sources and image destinations. For example, the image processor 102 may receive input images 111 from one or more image sources and provide processed images as part of GR-based output 112 to one or more image destinations. At least a subset of such image sources and image destinations may be implemented at least in part utilizing one or more of the processing devices 106. Accordingly, at least a subset of the input images 111 may be provided to the image processor 102 over network 104 for processing from one or more of the processing devices 106. Similarly, processed images or other related GR-based output 112 may be delivered by the image processor 102 over network 104 to one or more of the processing devices 106. Such processing devices may therefore be viewed as examples of image sources or image destinations as those terms are used herein.
A given image source may comprise, for example, a 3D imager such as an SL camera or a ToF camera configured to generate depth images, or a 2D imager configured to generate grayscale images, color images, infrared images or other types of 2D images. It is also possible that a single imager or other image source can provide both a depth image and a corresponding 2D image such as a grayscale image, a color image or an infrared image. For example, certain types of existing 3D cameras are able to produce a depth map of a given scene as well as a 2D image of the same scene. Alternatively, a 3D imager providing a depth map of a given scene can be arranged in proximity to a separate high-resolution video camera or other 2D imager providing a 2D image of substantially the same scene. Another example of an image source is a storage device or server that provides images to the image processor 102 for processing.
A given image destination may comprise, for example, one or more display screens of a human-machine interface of a computer or mobile phone, or at least one storage device or server that receives processed images from the image processor 102.
It should also be noted that the image processor 102 may be at least partially combined with at least a subset of the one or more image sources and the one or more image destinations on a common processing device. Thus, for example, a given image source and the image processor 102 may be collectively implemented on the same processing device. Similarly, a given image destination and the image processor 102 may be collectively implemented on the same processing device.
In the present embodiment, the image processor 102 is configured to implement gesture recognition based on palm boundary detection.
As noted above, the input images 111 may comprise respective depth images generated by a depth imager such as an SL camera or a ToF camera. Other types and arrangements of images may be received, processed and generated in other embodiments, including 2D images or combinations of 2D and 3D images.
The particular number and arrangement of modules shown in image processor 102 in the FIG. 1 embodiment can be varied in other embodiments. For example, in other embodiments two or more of these modules may be combined into a lesser number of modules. An otherwise conventional image processing integrated circuit or other type of image processing circuitry suitably modified to perform processing operations as disclosed herein may be used to implement at least a portion of one or more of the modules 114, 115, 116, 117, 118 and 119 of image processor 102. One possible example of image processing circuitry that may be used in one or more embodiments of the invention is an otherwise conventional graphics processor suitably reconfigured to perform functionality associated with one or more of the modules 114, 115, 116, 117, 118 and 119.
The processing devices 106 may comprise, for example, computers, mobile phones, servers or storage devices, in any combination. One or more such devices also may include, for example, display screens or other user interfaces that are utilized to present images generated by the image processor 102. The processing devices 106 may therefore comprise a wide variety of different destination devices that receive processed image streams or other types of GR-based output 112 from the image processor 102 over the network 104, including by way of example at least one server or storage device that receives one or more processed image streams from the image processor 102.
Although shown as being separate from the processing devices 106 in the present embodiment, the image processor 102 may be at least partially combined with one or more of the processing devices 106. Thus, for example, the image processor 102 may be implemented at least in part using a given one of the processing devices 106. By way of example, a computer or mobile phone may be configured to incorporate the image processor 102 and possibly a given image source. Image sources utilized to provide input images 111 in the image processing system 100 may therefore comprise cameras or other imagers associated with a computer, mobile phone or other processing device. As indicated previously, the image processor 102 may be at least partially combined with one or more image sources or image destinations on a common processing device.
The image processor 102 in the present embodiment is assumed to be implemented using at least one processing device and comprises a processor 120 coupled to a memory 122. The processor 120 executes software code stored in the memory 122 in order to control the performance of image processing operations. The image processor 102 also comprises a network interface 124 that supports communication over network 104.
The processor 120 may comprise, for example, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor (DSP), or other similar processing device component, as well as other types and arrangements of image processing circuitry, in any combination.
The memory 122 stores software code for execution by the processor 120 in implementing portions of the functionality of image processor 102, such as portions of modules 114 through 119. A given such memory that stores software code for execution by a corresponding processor is an example of what is more generally referred to herein as a computer-readable medium or other type of computer program product having computer program code embodied therein, and may comprise, for example, electronic memory such as random access memory (RAM) or read-only memory (ROM), magnetic memory, optical memory, or other types of storage devices in any combination. As indicated above, the processor may comprise portions or combinations of a microprocessor, ASIC, FPGA, CPU, ALU, DSP or other image processing circuitry.
It should also be appreciated that embodiments of the invention may be implemented in the form of integrated circuits. In a given such integrated circuit implementation, identical die are typically formed in a repeated pattern on a surface of a semiconductor wafer. Each die includes an image processor or other image processing circuitry as described herein, and may include other structures or circuits. The individual die are cut or diced from the wafer, then packaged as an integrated circuit. One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered embodiments of the invention.
The particular configuration of image processing system 100 as shown in FIG. 1 is exemplary only, and the system 100 in other embodiments may include other elements in addition to or in place of those specifically shown, including one or more elements of a type commonly found in a conventional implementation of such a system.
For example, in some embodiments, the image processing system 100 is implemented as a video gaming system or other type of gesture-based system that processes image streams in order to recognize user gestures. The disclosed techniques can be similarly adapted for use in a wide variety of other systems requiring a gesture-based human-machine interface, and can also be applied to other applications, such as machine vision systems in robotics and other industrial applications that utilize gesture recognition.
The operation of the image processor 102 will be described in greater detail below in conjunction with FIGS. 2 through 5.
FIG. 2 shows a hand 200 within an image 201. The image 201 may be viewed as one of the input images 111 applied to the image processor 102. In this figure, the hand 200 is angled within the image 201 along an axis corresponding to a main direction 202 of the hand. The preprocessing module 114 receives this input image and performs an orientation normalization operation that illustratively involves rotating the image or portions thereof such that the main direction 202 of the hand 200 corresponds to a known direction. A corresponding hand 300 after rotation is shown in FIG. 3, with the main direction now substantially coinciding with the vertical direction. Thus, the input image has been adjusted such that the main direction of the hand now has a substantially vertical orientation.
The orientation normalization operation used to produce the image of FIG. 3 comprising the rotated hand 300 may be implemented by performing principal component analysis (PCA) to determine the main direction 202 of the hand 200 and then rotating the image 201 by an angle based on the determined main direction.
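A minimal sketch of this orientation normalization, assuming the hand pixels have already been segmented into a binary mask (the function name and sign convention are illustrative, not from the patent):

```python
import numpy as np
from scipy import ndimage

def normalize_orientation(image, hand_mask):
    """Rotate the image so the hand's PCA main direction becomes vertical."""
    ys, xs = np.nonzero(hand_mask)                 # coordinates of hand pixels
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)                        # center the point cloud
    # Main direction = eigenvector of the covariance matrix with largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    main = eigvecs[:, np.argmax(eigvals)]
    # Signed angle between the main direction and the vertical axis, in degrees.
    angle = np.degrees(np.arctan2(main[0], main[1]))
    return ndimage.rotate(image, angle, reshape=False, order=1)
```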
Other types of normalization can also be applied. For example, scale normalization may be performed by the preprocessing module 114 in conjunction with the above-described orientation normalization. One possible type of scale normalization may involve adjusting the scale of the input image until the ratio of the area occupied by the hand to the total image size matches an average of such ratios for training images in a training database 400 of FIG. 4 used by the training module 118. The scale adjustment may be implemented by applying interpolation to the image based on a scale factor.
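A matching sketch of the scale normalization, assuming the frame size is held fixed while the content is rescaled; target_ratio stands for the average hand-to-image area ratio over the training database (an assumed representation):

```python
import numpy as np
from scipy import ndimage

def normalize_scale(image, hand_mask, target_ratio):
    """Rescale so the hand-area / image-area ratio matches the training average."""
    ratio = hand_mask.sum() / float(hand_mask.size)
    factor = np.sqrt(target_ratio / ratio)         # area scales as factor**2
    scaled = ndimage.zoom(image, factor, order=1)  # interpolate by the scale factor
    out = np.zeros_like(image)                     # keep the original frame size
    h = min(out.shape[0], scaled.shape[0])
    w = min(out.shape[1], scaled.shape[1])
    out[:h, :w] = scaled[:h, :w]                   # crop or pad as needed
    return out
```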
In addition to or in place of the rotating and scaling normalizations noted above, shifting normalizations may be applied, as well as various combinations of these and other normalizations.
In some embodiments, instead of applying rotating, scaling, shifting or other normalizations to the input image itself, one or more corresponding normalizing transformations may be applied to a modified image comprising features such as edges that have been extracted from the input image. A given modified image of this type, which may be in the form of an edge image or similar feature map, is intended to be encompassed by the term "image" as generally used herein.
After application of any appropriate normalizations in preprocessing module 114 as described above, the palm boundary detection process begins in palm boundary detection module 115. The palm boundary detection process in the present embodiment initially involves generating multiple candidate images each corresponding to a different candidate palm boundary. Palm boundary detection is completed upon selection of a particular one of these candidate palm boundaries for the given input image. In this embodiment, the palm boundary detection process is assumed to be integrated with the recognition process, and thus modules 115 and 116 may be viewed as collectively performing the associated palm boundary determination and recognition operations.
The term "palm boundary" as used herein is intended to be broadly construed, so as to encompass linear boundaries or other types of boundaries that denote a peripheral area of a palm of a hand in an image. It is to be appreciated, however, that the disclosed techniques can be adapted for use with other types of boundaries in performing gesture recognition in the image processing system 100. Thus, embodiments of the invention are not limited to use with detection of palm boundaries. The module 1 15 in FIG. 1 can therefore be more generally implemented as a boundary detection module.
Also, embodiments of the invention are not limited to use in recognition of hand gestures, but can be applied to other types of gestures as well. The term "gesture" as used herein is therefore intended to be broadly construed.
Referring again to FIG. 3, multiple candidate palm boundaries 302 are shown superimposed on the rotated hand 300. The candidate palm boundaries in this example are numbered 1, 2, ..., S-1, S as indicated. Each of these palm boundaries is characterized by a substantially horizontal line that separates the hand 300 into a first portion above the boundary and a second portion below the boundary. The candidate palm boundaries are therefore generally oriented in a direction perpendicular to the substantially vertical main direction of the rotated hand 300 of FIG. 3. The palm boundary detection process implemented by palm boundary detection module 115 is generally configured to determine which of such multiple candidate palm boundaries is most appropriate for the corresponding input image that contains hand 300.
Accordingly, the present embodiment determines the appropriate palm boundary for a given input image by evaluating the multiple candidate palm boundaries. As will be described in more detail below in conjunction with FIG. 5, this process is illustratively performed jointly with classification of the hand gesture, such that the selected palm boundary is the one that in combination with a corresponding classification result provides the highest overall probability relative to the class parameters and mapping functions 119 determined by training module 118 using images from the training database 400 shown in FIG. 4. The joint selection of a particular palm boundary and a corresponding classification result in the present embodiment is therefore based on training where each training sample includes information about the correct palm boundary within a given training image.
The multiple candidate palm boundaries 302 may be determined in a variety of different ways, including, for example, use of fixed, increasing, decreasing or random step sizes between adjacent candidate palm boundaries, as well as combinations of these and possibly other types of inter-boundary step sizes. Although substantially horizontal palm boundaries are used in FIG. 3, other embodiments can use different palm boundaries, such as angled boundaries or combinations of various boundaries of different types.
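A sketch of how such candidate boundary rows might be generated follows, illustrating only the fixed-step and random-step schemes; the function name and parameters are hypothetical, and NumPy is assumed:

```python
import numpy as np

def candidate_boundaries(height, S, mode="fixed", rng=None):
    """Return S candidate palm-boundary row indices for an image of a
    given height, using one of the step-size schemes mentioned above."""
    if mode == "fixed":
        step = height // (S + 1)          # constant step between boundaries
        return [step * (t + 1) for t in range(S)]
    if mode == "random":
        rng = rng or np.random.default_rng()
        rows = rng.choice(np.arange(1, height), size=S, replace=False)
        return sorted(int(r) for r in rows)
    raise ValueError("unknown mode: %s" % mode)
```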
For each of the candidate palm boundaries, a corresponding image is generated from a given normalized input image I for further processing. In this embodiment, the S different candidate palm boundaries are utilized to generate respective different images I_1, ..., I_S, where the image I_t, 1 ≤ t ≤ S, corresponds to the t-th candidate palm boundary, and is the same as the normalized input image I for pixels above the t-th palm boundary, and has all zeros, ones, average background values or other predetermined values as its pixel values at or below the t-th palm boundary.
Thus, each of the images I_1, ..., I_S has the same pixel values as the normalized input image I for all pixels above its corresponding palm boundary, but has predetermined pixel values for all of its pixels at or below that palm boundary. Each of the images I_1, ..., I_S may therefore be viewed as being "cut" into first and second portions at the corresponding palm boundary. These images are examples of what are more generally referred to herein as "cut images" or still more generally "modified images," where the modifications are based on the corresponding palm boundaries.
Each such modified image may be characterized as comprising first and second portions on opposite sides of its candidate palm boundary with the first portion of the modified image comprising pixels having values that are the same as those of respective corresponding pixels in a first portion of the normalized image, and the second portion of the modified image comprising pixels having values that are different than the values of respective corresponding pixels in a second portion of the normalized image. In the more particular example given above, the first and second portions of the modified image are portions above and below the candidate palm boundary.
Other types of modified images may be generated based on respective candidate palm boundaries in other embodiments.
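For the particular cut images described above, the generation step could be sketched as follows, again assuming NumPy image arrays and boundary row indices as produced earlier:

```python
def cut_images(I, boundaries, fill_value=0):
    """Generate one modified ("cut") image per candidate palm boundary.

    Each I_t keeps the pixels of the normalized image I above the t-th
    boundary row and replaces everything at or below it with fill_value
    (zeros here; ones or an average background value work the same way)."""
    cuts = []
    for row in boundaries:
        I_t = I.copy()
        I_t[row:, :] = fill_value   # pixels at or below the boundary
        cuts.append(I_t)
    return cuts
```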
Additional details regarding the further processing of cut images I_1, ..., I_S will be described below in conjunction with FIG. 5. As indicated previously, this further processing makes use of class parameters and mapping functions 119 generated by training module 118 using images from the training database 400. The training database 400 may be implemented within image processor 102, possibly utilizing a portion of memory 122 or another suitable storage device, or may alternatively be implemented externally to image processor 102 on one or more of the processing devices 106.
It will be assumed that the palm boundary detection and recognition processes implemented in some embodiments of the FIG. 1 system are based on Gaussian Mixture Models (GMMs), although a wide variety of other classification techniques can be used in other embodiments.
A GMM is a statistical multidimensional distribution based on a number of weighted multivariate normal distributions. These weighted multivariate normal distributions may collectively be of the form

p(x) = Σ_{i=1}^{M} w_i p_i(x),

where:

x is an N-dimensional vector x = (x_1, ..., x_N) in the space R^N;

p(x) is the probability of vector x;

M is the number of components or "clusters" in the GMM;

w_i is the weight of the i-th cluster, where w_i ≥ 0 and Σ_{i=1}^{M} w_i = 1;

p_i(x) is the multivariate normal distribution of the i-th cluster, i.e. p_i(x) ~ N(μ_i, Ω_i), where μ_i is an N × 1 mean vector and Ω_i is an N × N nonnegative-definite covariance matrix, such that:

p_i(x) = (2π)^{−N/2} |Ω_i|^{−1/2} exp( −(1/2) (x − μ_i)^T Ω_i^{−1} (x − μ_i) ),

where T in this equation denotes the transpose operator.
Assume that there are L observations X = (x^1, ..., x^L), where each x^j, 1 ≤ j ≤ L, is an N-dimensional vector in R^N, i.e. x^j = (x^j_1, ..., x^j_N). Construction of the GMM in this case may be characterized as an optimization problem that maximizes the overall probability of the observations, i.e.

arg max Σ_{j=1}^{L} p(x^j).
This optimization problem may be solved using the well-known Expectation-Maximization algorithm (EM-alg). EM-alg is an iterative algorithm and may be used to find and adjust the above-noted distribution parameters {w_i, μ_i, Ω_i} for i = 1, ..., M. The EM-alg generally involves the following steps:

1. Fill the parameters with random values.

2. Expectation step: using the observations and the parameters from the previous step, estimate the log-likelihood.

3. Maximization step: find the parameters that maximize the log-likelihood and update them.

In the context of the FIG. 1 system it may be further assumed that there are multiple observations for each of a plurality of classes corresponding to respective static hand gestures to be recognized by the GR system 110. More particularly, assume there are K classes of observed data, with L_c observations for each class c, where 1 ≤ c ≤ K. In such an arrangement, for each class c the above-described EM-alg may be used to find corresponding optimal parameters T_c = {w_i^c, μ_i^c, Ω_i^c}_{i=1}^{M}, and for any vector x to be classified the recognition result or target class is given by c_x = arg max_c p(x | T_c).
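As a concrete but non-authoritative illustration, the per-class EM fitting and the recognition rule above could be sketched with scikit-learn's GaussianMixture, whose fit method runs the EM algorithm just described:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(features_by_class, M):
    """Fit one GMM per gesture class with EM.

    features_by_class: list of K arrays, each of shape (L_c, N) holding
    the feature vectors for class c. The fitted model's weights_, means_
    and covariances_ attributes play the role of T_c = {w_i, mu_i, Omega_i}."""
    models = []
    for X in features_by_class:
        gmm = GaussianMixture(n_components=M, covariance_type="full")
        gmm.fit(X)   # EM: alternating expectation and maximization steps
        models.append(gmm)
    return models

def classify(models, x):
    """Recognition rule c_x = arg max_c p(x | T_c), via log-likelihoods."""
    scores = [m.score_samples(x.reshape(1, -1))[0] for m in models]
    return int(np.argmax(scores))  # 0-based class index
```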
As indicated above, the K classes may correspond to respective ones of a plurality of different static hand gestures, also referred to herein as hand poses, such as, for example, an open palm as illustrated in FIGS. 2 and 3, a fist, a forefinger or "poke" and so on. Other embodiments can be configured to recognize other types of gestures, including dynamic gestures. The training module 118 processes one or more training images from training database 400 for each of these represented classes. The training database 400 should include training images having properly recognized palm boundaries and associated hand gestures in normalized form. For example, these training images should have substantially the same width and height in pixels, and similar orientation and scale, as the normalized images to be processed by the modules 115 and 116 of the GR system 110. The appropriate palm boundary in each training image may be determined by an expert and annotated accordingly on the image.
As illustrated in FIG. 4, the training module 118 processes images from the training database 400 in order to generate class parameters 119A including optimal parameters T_j for all of the classes j = 1, ..., K. The training module 118 also generates one or more mapping functions 119B, including a mapping function F that, when applied to a given normalized input image I from the training database 400, yields a vector x within R^N. Typically, the value of N is much less than the number of pixels in the image, and so the processing performed by the training module 118 could be based on features extracted from the image, such as palm width, height, perimeter, area, central moments, etc.
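Purely as an illustration of what such a mapping function F might compute, the following sketch extracts a few geometric features from a binary hand mask with OpenCV. It assumes OpenCV 4's findContours signature, and the specific feature set is an assumption echoing the examples above, not a prescribed choice:

```python
import numpy as np
import cv2

def mapping_F(hand_mask):
    """Illustrative mapping function F: image -> low-dimensional vector.

    Computes width, height, perimeter, area and a few central moments
    of the largest contour in a binary hand mask."""
    contours, _ = cv2.findContours(hand_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)        # largest blob = hand
    _, _, w, h = cv2.boundingRect(cnt)
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    m = cv2.moments(cnt)
    return np.array([w, h, perimeter, area, m["mu20"], m["mu02"], m["mu11"]])
```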
The mapping function F(I) = x = (x_1, ..., x_N) generated by the training module 118 is applied to all L_c images from the class c, and then the GMM for the class c is constructed by applying the above-described EM-alg to find the optimal parameters T_c = {w_i^c, μ_i^c, Ω_i^c}_{i=1}^{M} for the class c. This process is repeated for each of the classes, resulting in K sets of optimal parameters T_1, ..., T_K. As noted above, the class parameters 119A comprising the optimal parameters for each class and the corresponding mapping function 119B are made accessible to the palm boundary detection module 115 and recognition module 116 for use in determining palm boundaries and recognizing gestures in the input images after those images are preprocessed in preprocessing module 114.
Referring now to FIG. 5, an exemplary process is shown for gesture recognition based on palm boundary detection in the image processing system 100 of FIG. 1. The FIG. 5 process is assumed to be implemented by the image processor 102 using its preprocessing module 114, palm boundary detection module 115 and recognition module 116, as well as class parameters and mapping functions 119 provided by training module 118, although one or more of the described operations can be performed by other system components in other embodiments.
It is further assumed in this embodiment that the input images 111 received in the image processor 102 from one or more image sources comprise an input depth image 500, more particularly denoted as image J. Steps 514, 515 and 516 of the FIG. 5 process generally include preprocessing, palm boundary detection and class recognition operations performed by the respective modules 114, 115 and 116 of the image processor 102. Other related operations are performed in multiple instances of steps 530, 532 and 534.
In the preprocessing step 514, an orientation normalization operation 502 and a scale normalization operation 504 are applied to the input image J to generate a normalized input image I. As previously described in conjunction with FIGS. 2 and 3, the orientation normalization may involve determining the main direction 202 of hand 200 within the input image J, possibly using PCA or a similar technique, and then rotating the input image by an amount based on the determined main direction of the hand.
Multiple candidate palm boundaries are then determined in the manner previously described. It is assumed that there are S substantially horizontal candidate palm boundaries of a type similar to that illustrated in FIG. 3.
In steps 530-1 through 530-S, respective cut images I_1, ..., I_S are generated for respective ones of the candidate palm boundaries 1, ..., S. As noted above, the image I_t, 1 ≤ t ≤ S, corresponds to the t-th candidate palm boundary, and is the same as the normalized input image I for pixels above the t-th palm boundary, and has all zeros, ones, average background values or other predetermined values as its pixel values at or below the t-th palm boundary, such that each of the images I_1, ..., I_S has the same pixel values as the normalized input image I for all pixels above its corresponding palm boundary, but has predetermined pixel values for all of its pixels at or below that palm boundary. Again, each of the images I_1, ..., I_S may therefore be viewed as being "cut" at the corresponding palm boundary.
In steps 532-1 through 532-S, vectors x^1 through x^S are obtained by applying the mapping function F to the respective images I_1, ..., I_S, i.e. x^1 = F(I_1), ..., x^S = F(I_S). The resulting vectors are also referred to herein as feature vectors.
Steps 534-t,j generally involve determining sets of probabilistic estimates for respective ones of the vectors x^1 through x^S relative to the sets of optimal parameters T_1, ..., T_K, where 1 ≤ t ≤ S and 1 ≤ j ≤ K. As mentioned above, each of the sets of optimal parameters T_j is associated with a corresponding one of a plurality of static hand gestures to be recognized by the GR system 110. Each set of probabilistic estimates is determined in this embodiment as a set of estimates p(x^t | T_j) for a given value of index t relative to the sets of optimal parameters T_j, where index j takes on integer values between 1 and K. Thus, steps 534-1,1 through 534-1,K determine a first set of probabilistic estimates p(x^1 | T_1) through p(x^1 | T_K). Similarly, steps 534-S,1 through 534-S,K determine an S-th set of probabilistic estimates p(x^S | T_1) through p(x^S | T_K). Other instances of steps 534 not explicitly shown determine the remaining sets of probabilistic estimates for respective ones of the remaining vectors x^2 through x^{S-1}.
Step 515 utilizes the resulting sets of probabilistic estimates to select a particular one of the candidate palm boundaries. More particularly, the palm boundary is selected in step 515 in accordance with the following equation:

b = arg max_{t=1..S} ( max_{j=1..K} p(x^t | T_j) )    (1)
where b denotes the particular palm boundary selected based on the sets of probabilistic estimates, and may take on any integer value between 1 and S.
Step 516 utilizes the same sets of probabilistic estimates to select a particular one of the K image classes. This recognition step more particularly recognizes a given one of the K image classes corresponding to a particular static hand gesture within the input image J, in accordance with the following equation:

c = arg max_{j=1..K} ( max_{t=1..S} p(x^t | T_j) )    (2)

where c denotes the particular class selected based on the sets of probabilistic estimates, and may take on any integer value between 1 and K.
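Combining steps 532, 534, 515 and 516, a compact sketch of the joint selection follows, reusing the fitted per-class models from the earlier training sketch; score_samples returns log p(x^t | T_j), and since the logarithm is monotone the arg max is unchanged:

```python
import numpy as np

def joint_select(cut_imgs, models, F):
    """Joint boundary/class selection per equations (1) and (2).

    F is whatever mapping function is in use; models are the K fitted
    per-class GMMs. Builds the S x K matrix of estimates and takes the
    row (boundary) and column (class) of its maximum."""
    X = [F(I_t) for I_t in cut_imgs]                         # x^1, ..., x^S
    logp = np.array([[m.score_samples(x.reshape(1, -1))[0]   # (S, K) matrix
                      for m in models] for x in X])
    t_max, j_max = np.unravel_index(np.argmax(logp), logp.shape)
    return t_max + 1, j_max + 1   # 1-based boundary b and class c
```

Because a single S × K matrix of estimates feeds both selections, b and c fall out of one pass over the data, reflecting the joint behavior described above.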
In other embodiments, other types of estimates may be used. For example, negative log-likelihood (NLL) estimates −log p(x^t | T_j) may be used in order to simplify arithmetic computations in some embodiments, in which case all instances of "max" should be replaced with corresponding instances of "min" in equations (1) and (2) of respective steps 515 and 516. The term "estimates" as used herein is intended to be broadly construed so as to encompass NLL estimates of the type noted above as well as other types of estimates that may or may not be based on probabilities.
Also, although GMMs and EM-alg are utilized in the training process in this embodiment, any of a wide variety of other classification techniques may be used in training module 118 to determine appropriate class parameters and mapping functions 119 for use in palm boundary detection and associated gesture recognition operations. For example, well-known techniques based on decision trees, neural networks, or nearest neighbor classification may be adapted for use in embodiments of the invention. These and other techniques can be applied in a straightforward manner to allow estimation of the likelihood function p(x | T_j) for a given feature vector x and a set of optimal parameters T_j for class j. Again, other types of estimates not necessarily of a probabilistic nature may be utilized.
In the FIG. 5 embodiment, the exemplary processing shown not only determines the palm boundary within a given input image but also performs a recognition function by classifying the corresponding gesture. Thus, at least a portion of the processing operations may be viewed as being performed by an integrated palm boundary detection and gesture recognition module. Other embodiments may perform only the palm boundary detection, possibly as part of a preprocessing operation, with recognition being performed as a separate operation based on the detected palm boundary.
The FIG. 5 process can be pipelined in a straightforward manner. For example, at least a portion of the steps can be performed using parallel computations, thereby reducing the overall latency of the process for a given input image, and facilitating implementation of the described techniques in real-time image processing applications.
As a more particular example, the estimates p(x^t | T_j) may be calculated independently on parallel processing hardware, with intermediate results or final values subsequently combined using the arg max(max(...)) function in steps 515 and 516.
At least portions of the GR-based output 112 may be further processed in the image processor 102, or supplied to another processing device 106 or image destination, as mentioned previously.
It is to be appreciated that the particular process steps used in the embodiment of FIG. 5 are exemplary only, and other embodiments can utilize different types and arrangements of image processing operations. For example, the particular manner in which the feature vectors and corresponding sets of estimates are generated can be varied in other embodiments. Also, the computations in steps 515 and 516 are equivalent and therefore can be combined into a single computation in other embodiments. In addition, steps indicated as being performed serially in the figure can be performed at least in part in parallel with one or more other steps in other embodiments. The particular steps and their interconnection as illustrated in FIG. 5 should therefore be viewed as one possible arrangement of process steps in one embodiment, and other embodiments may include additional or alternative process steps arranged in different processing orders.
Embodiments of the invention provide particularly efficient techniques for boundary detection based gesture recognition. For example, one or more of these embodiments can perform joint boundary detection and gesture recognition that allows a system to obtain both boundary and recognition results at substantially the same time. In such an embodiment, the boundary determination is integrated with the recognition process, in a manner that facilitates highly efficient parallel implementation using image processing circuitry on one or more processing devices. The disclosed embodiments can be configured to utilize GMMs or a wide variety of other classification techniques.
It should again be emphasized that the embodiments of the invention as described herein are intended to be illustrative only. For example, other embodiments of the invention can be implemented utilizing a wide variety of different types and arrangements of image processing circuitry, modules and processing operations than those utilized in the particular embodiments described herein. In addition, the particular assumptions made herein in the context of describing certain embodiments need not apply in other embodiments. These and numerous other alternative embodiments within the scope of the following claims will be readily apparent to those skilled in the art.

Claims

What is claimed is:
1. A method comprising:
identifying a plurality of candidate boundaries in an image;
obtaining corresponding modified images for respective ones of the candidate boundaries;
applying a mapping function to each of the modified images to generate a corresponding vector;
determining sets of estimates for respective ones of the vectors relative to designated class parameters; and
selecting a particular one of the candidate boundaries based on the sets of estimates;
wherein said identifying, obtaining, applying, determining and selecting are implemented in at least one processing device comprising a processor coupled to a memory.
2. The method of claim 1 wherein identifying a plurality of candidate boundaries comprises identifying a plurality of candidate palm boundaries associated with a hand in the image.
3. The method of claim 1 further comprising:
receiving an input image; and
performing one or more normalization operations on the input image to obtain a normalized image in which the candidate boundaries are identified.
4. The method of claim 3 wherein said one or more normalization operations comprise at least one of an orientation normalization and a scale normalization.
5. The method of claim 4 wherein the orientation normalization comprises:
determining a main direction of a hand within the input image; and rotating the input image by an amount based on the determined main direction of the hand.
6. The method of claim 1 further comprising selecting a particular one of a plurality of classes to recognize a corresponding gesture based on the sets of estimates.
7. The method of claim 1 wherein identifying a plurality of candidate boundaries in the image further comprises determining at least a subset of said boundaries based on one or more of fixed, increasing, decreasing or random step sizes between adjacent candidate boundaries.
8. The method of claim 1 wherein at least a subset of the candidate boundaries comprise candidate palm boundaries oriented in a direction perpendicular to a main direction of a hand in the image.
9. The method of claim 3 wherein each of the modified images comprises first and second portions on opposite sides of its candidate boundary with the first portion of the modified image comprising pixels having values that are the same as those of respective corresponding pixels in a first portion of the normalized image and the second portion of the modified image comprising pixels having values that are different than the values of respective corresponding pixels in a second portion of the normalized image.
10. The method of claim 9 wherein each of the pixels in the second portion of each modified image has the same predetermined value.
11. The method of claim 1 wherein the designated class parameters include sets of class parameters for respective ones of a plurality of classes each corresponding to a different gesture.
12. The method of claim 11 wherein a given one of the sets of class parameters for a particular class c comprises a set of class parameters T_c = {w_i^c, μ_i^c, Ω_i^c}_{i=1}^{M} based on a Gaussian Mixture Model having M clusters, where w_i denotes a weight of an i-th one of the M clusters, and where μ_i and Ω_i denote a mean vector and a covariance matrix, respectively, of a multivariate normal distribution of the i-th cluster.
13. The method of claim 11 wherein a given one of the sets of class parameters for a particular class is generated by applying the mapping function to each of a plurality of training images of the gesture associated with that class to generate a corresponding plurality of vectors and utilizing those vectors to construct a classification model having the given set of class parameters.
14. The method of claim 1 wherein determining sets of estimates for respective ones of the vectors comprises generating a given set of probabilistic estimates p(x^t | T_j) for a particular one of the vectors x^t relative to sets of class parameters T_j, where index t takes on integer values between 1 and S, where S denotes the number of candidate boundaries, and where index j takes on integer values between 1 and K, where K denotes a total number of classes each corresponding to a different gesture.
15. The method of claim 1 wherein determining sets of estimates for respective ones of the vectors comprises generating a given set of negative log-likelihood estimates −log p(x^t | T_j) for a particular one of the vectors x^t relative to sets of class parameters T_j, where index t takes on integer values between 1 and S, where S denotes the number of candidate boundaries, and where index j takes on integer values between 1 and K, where K denotes a total number of classes each corresponding to a different gesture.
16. A computer-readable storage medium having computer program code embodied therein, wherein the computer program code when executed in a processing device causes the processing device to perform the method of claim 1.
17. An apparatus comprising:
at least one processing device comprising a processor coupled to a memory; wherein said at least one processing device is configured to identify a plurality of candidate boundaries in an image, to obtain corresponding modified images for respective ones of the candidate boundaries, to apply a mapping function to each of the modified images to generate a corresponding vector, to determine sets of estimates for respective ones of the vectors relative to designated class parameters, and to select a particular one of the candidate boundaries based on the sets of estimates.
18. The apparatus of claim 17 wherein the processing device comprises an image processor, the image processor comprising:
a preprocessing module;
a boundary detection module; and
a recognition module configured to select a particular one of a plurality of classes to recognize a corresponding gesture based on the sets of estimates; wherein said modules are implemented using image processing circuitry comprising at least one graphics processor of the image processor.
19. An integrated circuit comprising the apparatus of claim 17.
20. An image processing system comprising the apparatus of claim 17.
PCT/US2014/031471 2013-07-22 2014-03-21 Gesture recognition method and apparatus based on analysis of multiple candidate boundaries WO2015012896A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
RU2013134325/08A RU2013134325A (en) 2013-07-22 2013-07-22 DEVICE AND METHOD FOR RECOGNITION OF GESTURES ON THE BASIS OF ANALYSIS OF MANY POSSIBLE SECTION BORDERS
RU2013134325 2013-07-22
US14/168,391 US20150023607A1 (en) 2013-07-22 2014-01-30 Gesture recognition method and apparatus based on analysis of multiple candidate boundaries
US14/168,391 2014-01-30

Publications (1)

Publication Number Publication Date
WO2015012896A1 true WO2015012896A1 (en) 2015-01-29

Family

ID=52343631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/031471 WO2015012896A1 (en) 2013-07-22 2014-03-21 Gesture recognition method and apparatus based on analysis of multiple candidate boundaries

Country Status (3)

Country Link
US (1) US20150023607A1 (en)
RU (1) RU2013134325A (en)
WO (1) WO2015012896A1 (en)




Also Published As

Publication number Publication date
RU2013134325A (en) 2015-01-27
US20150023607A1 (en) 2015-01-22


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14829694; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 14829694; Country of ref document: EP; Kind code of ref document: A1)