WO2015065520A1 - Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition - Google Patents


Info

Publication number
WO2015065520A1
Authority
WO
WIPO (PCT)
Prior art keywords
hand
interest
image
hand region
main direction
Prior art date
Application number
PCT/US2014/036339
Other languages
French (fr)
Inventor
Ivan L. MAZURENKO
Dmitry N. BABIN
Alexander A. PETYUSHKO
Denis V. PARFENOV
Pavel A. ALISEYCHIK
Alexander B. KHOLODENKO
Original Assignee
Lsi Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lsi Corporation filed Critical Lsi Corporation
Priority to US14/358,320 priority Critical patent/US20150161437A1/en
Publication of WO2015065520A1 publication Critical patent/WO2015065520A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the field relates generally to image processing, and more particularly to image processing for recognition of gestures.
  • Image processing is important in a wide variety of different applications, and such processing may involve two-dimensional (2D) images, three-dimensional (3D) images, or combinations of multiple images of different types.
  • a 3D image of a spatial scene may be generated in an image processor using triangulation based on multiple 2D images captured by respective cameras arranged such that each camera has a different view of the scene.
  • a 3D image can be generated directly using a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera.
  • raw image data from an image sensor is usually subject to various preprocessing operations.
  • the preprocessed image data is then subject to additional processing used to recognize gestures in the context of particular gesture recognition applications.
  • Such applications may be implemented, for example, in video gaming systems, kiosks or other systems providing a gesture-based user interface.
  • These other systems include various electronic consumer devices such as laptop computers, tablet computers, desktop computers, mobile phones and television sets.
  • an image processing system comprises an image processor having image processing circuitry and an associated memory.
  • the image processor is configured to implement a gesture recognition system comprising a static pose recognition module.
  • the static pose recognition module is configured to identify a hand region of interest in at least one image, to perform a skeletonization operation on the hand region of interest, to determine a main direction of the hand region of interest utilizing a result of the skeletonization operation, to perform a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation, and to recognize a static pose of the hand region of interest based on the estimated hand features.
  • performing a scanning operation utilizing the determined main direction may comprise determining a plurality of lines perpendicular to a line of the main direction, and scanning the hand region of interest along the perpendicular lines.
  • FIG. 1 is a block diagram of an image processing system comprising an image processor implementing a static pose recognition module in an illustrative embodiment.
  • FIG. 2 is a flow diagram of an exemplary static pose recognition process performed by the static pose recognition module in the image processor of FIG. 1.
  • FIG. 3 is a flow diagram showing a more detailed view of a process for determining a main direction of a hand region of interest in one of the steps of the FIG. 2 process.
  • FIGS. 4, 5 and 6 illustrate the estimation of hand features utilizing the main direction determined by the process of FIG. 3.
  • Detailed Description
  • Embodiments of the invention will be illustrated herein in conjunction with exemplary image processing systems that include image processors or other types of processing devices configured to perform gesture recognition. It should be understood, however, that embodiments of the invention are more generally applicable to any image processing system or associated device or technique that involves recognizing static poses in one or more images.
  • FIG. 1 shows an image processing system 100 in an embodiment of the invention.
  • the image processing system 100 comprises an image processor 102 that is configured for communication over a network 104 with a plurality of processing devices 106-1, 106-2, ..., 106-M.
  • the image processor 102 implements a recognition subsystem 108 within a gesture recognition (GR) system 110.
  • the GR system 110 in this embodiment processes input images 111 from one or more image sources and provides corresponding GR-based output 112.
  • the GR-based output 112 may be supplied to one or more of the processing devices 106 or to other system components not specifically illustrated in this diagram.
  • the recognition subsystem 108 of GR system 110 more particularly comprises a static pose recognition module 114 and one or more other recognition modules 115.
  • the other recognition modules may comprise, for example, respective recognition modules configured to recognize cursor gestures and dynamic gestures.
  • the operation of illustrative embodiments of the GR system 110 of image processor 102 will be described in greater detail below.
  • the recognition subsystem 108 receives inputs from additional subsystems 116, which may comprise one or more image processing subsystems configured to implement functional blocks associated with gesture recognition in the GR system 110, such as, for example, functional blocks for input frame acquisition, noise reduction, background estimation and removal, or other types of preprocessing.
  • the background estimation and removal block is implemented as a separate subsystem that is applied to an input image after a preprocessing block is applied to the image.
  • Exemplary background estimation and removal techniques suitable for use in the GR system 110 are described in Russian Patent Application No. 2013135506, filed July 29, 2013 and entitled "Image Processor Configured for Efficient Estimation and Elimination of Background Information in Images," which is commonly assigned herewith and incorporated by reference herein.
  • the recognition subsystem 108 generates GR events for consumption by one or more of a set of GR applications 118.
  • the GR events may comprise information indicative of recognition of one or more particular gestures within one or more frames of the input images 111, such that a given GR application in the set of GR applications 118 can translate that information into a particular command or set of commands to be executed by that application.
  • the recognition subsystem 108 recognizes within the image a gesture from a specified gesture vocabulary and generates a corresponding gesture pattern identifier (ID) and possibly additional related parameters for delivery to one or more of the applications 118.
  • the GR system 110 may provide GR events or other information, possibly generated by one or more of the GR applications 118, as GR-based output 112. Such output may be provided to one or more of the processing devices 106. In other embodiments, at least a portion of set of GR applications 118 is implemented at least in part on one or more of the processing devices 106.
  • Portions of the GR system 110 may be implemented using separate processing layers of the image processor 102. These processing layers comprise at least a portion of what is more generally referred to herein as "image processing circuitry" of the image processor 102.
  • the image processor 102 may comprise a preprocessing layer implementing a preprocessing module and a plurality of higher processing layers for performing other functions associated with recognition of gestures within frames of an input image stream comprising the input images 111.
  • Such processing layers may also be implemented in the form of respective subsystems of the GR system 110.
  • embodiments of the invention are not limited to recognition of static or dynamic hand gestures, but can instead be adapted for use in a wide variety of other machine vision applications involving gesture recognition, and may comprise different numbers, types and arrangements of modules, subsystems, processing layers and associated functional blocks.
  • processing operations associated with the image processor 102 in the present embodiment may instead be implemented at least in part on other devices in other embodiments.
  • preprocessing operations may be implemented at least in part in an image source comprising a depth imager or other type of imager that provides at least a portion of the input images 111.
  • one or more of the applications 118 may be implemented on a different processing device than the subsystems 108 and 116, such as one of the processing devices 106.
  • image processor 102 may itself comprise multiple distinct processing devices, such that different portions of the GR system 110 are implemented using two or more processing devices.
  • image processor as used herein is intended to be broadly construed so as to encompass these and other arrangements.
  • the GR system 110 performs preprocessing operations on received input images 111 from one or more image sources.
  • This received image data in the present embodiment is assumed to comprise raw image data received from a depth sensor, but other types of received image data may be processed in other embodiments.
  • Such preprocessing operations may include noise reduction and background removal.
  • the raw image data received by the GR system 110 from the depth sensor may include a stream of frames comprising respective depth images, with each such depth image comprising a plurality of depth image pixels.
  • a given depth image D may be provided to the GR system 110 in the form of a matrix of real values.
  • a given such depth image is also referred to herein as a depth map.
  • image is intended to be broadly construed.
  • the image processor 102 may interface with a variety of different image sources and image destinations.
  • the image processor 102 may receive input images 111 from one or more image sources and provide processed images as part of GR-based output 112 to one or more image destinations. At least a subset of such image sources and image destinations may be implemented at least in part utilizing one or more of the processing devices 106.
  • At least a subset of the input images 111 may be provided to the image processor 102 over network 104 for processing from one or more of the processing devices 106.
  • processed images or other related GR-based output 112 may be delivered by the image processor 102 over network 104 to one or more of the processing devices 106.
  • processing devices may therefore be viewed as examples of image sources or image destinations as those terms are used herein.
  • a given image source may comprise, for example, a 3D imager such as an SL camera or a ToF camera configured to generate depth images, or a 2D imager configured to generate grayscale images, color images, infrared images or other types of 2D images. It is also possible that a single imager or other image source can provide both a depth image and a corresponding 2D image such as a grayscale image, a color image or an infrared image. For example, certain types of existing 3D cameras are able to produce a depth map of a given scene as well as a 2D image of the same scene. Alternatively, a 3D imager providing a depth map of a given scene can be arranged in proximity to a separate high-resolution video camera or other 2D imager providing a 2D image of substantially the same scene.
  • a given image source may also comprise, for example, a storage device or server that provides images to the image processor 102 for processing.
  • a given image destination may comprise, for example, one or more display screens of a human-machine interface of a computer or mobile phone, or at least one storage device or server that receives processed images from the image processor 102.
  • the image processor 102 may be at least partially combined with at least a subset of the one or more image sources and the one or more image destinations on a common processing device.
  • a given image source and the image processor 102 may be collectively implemented on the same processing device.
  • a given image destination and the image processor 102 may be collectively implemented on the same processing device.
  • the image processor 102 is configured to recognize hand gestures, although the disclosed techniques can be adapted in a straightforward manner for use with other types of gesture recognition processes.
  • the input images 111 may comprise respective depth images generated by a depth imager such as an SL camera or a ToF camera.
  • Other types and arrangements of images may be received, processed and generated in other embodiments, including 2D images or combinations of 2D and 3D images.
  • image processor 102 in the FIG. 1 embodiment can be varied in other embodiments.
  • an otherwise conventional image processing integrated circuit or other type of image processing circuitry suitably modified to perform processing operations as disclosed herein may be used to implement at least a portion of one or more of the components 114, 115, 116 and 118 of image processor 102.
  • image processing circuitry that may be used in one or more embodiments of the invention is an otherwise conventional graphics processor suitably reconfigured to perform functionality associated with one or more of the components 114, 115, 116 and 118.
  • the processing devices 106 may comprise, for example, computers, mobile phones, servers or storage devices, in any combination. One or more such devices also may include, for example, display screens or other user interfaces that are utilized to present images generated by the image processor 102.
  • the processing devices 106 may therefore comprise a wide variety of different destination devices that receive processed image streams or other types of GR-based output 112 from the image processor 102 over the network 104, including by way of example at least one server or storage device that receives one or more processed image streams from the image processor 102.
  • the image processor 102 may be at least partially combined with one or more of the processing devices 106.
  • the image processor 102 may be implemented at least in part using a given one of the processing devices 106.
  • a computer or mobile phone may be configured to incorporate the image processor 102 and possibly a given image source.
  • Image sources utilized to provide input images 111 in the image processing system 100 may therefore comprise cameras or other imagers associated with a computer, mobile phone or other processing device.
  • the image processor 102 may be at least partially combined with one or more image sources or image destinations on a common processing device.
  • the image processor 102 in the present embodiment is assumed to be implemented using at least one processing device and comprises a processor 120 coupled to a memory 122.
  • the processor 120 executes software code stored in the memory 122 in order to control the performance of image processing operations.
  • the image processor 102 also comprises a network interface 124 that supports communication over network 104.
  • the network interface 124 may comprise one or more conventional transceivers. In other embodiments, the image processor 102 need not be configured for communication with other devices over a network, and in such embodiments the network interface 124 may be eliminated.
  • the processor 120 may comprise, for example, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor (DSP), or other similar processing device component, as well as other types and arrangements of image processing circuitry, in any combination.
  • the memory 122 stores software code for execution by the processor 120 in implementing portions of the functionality of image processor 102, such as the subsystems 108 and 116 and the GR applications 118.
  • a given such memory that stores software code for execution by a corresponding processor is an example of what is more generally referred to herein as a computer-readable medium or other type of computer program product having computer program code embodied therein, and may comprise, for example, electronic memory such as random access memory (RAM) or read-only memory (ROM), magnetic memory, optical memory, or other types of storage devices in any combination.
  • the processor may comprise portions or combinations of a microprocessor, ASIC, FPGA, CPU, ALU, DSP or other image processing circuitry.
  • embodiments of the invention may be implemented in the form of integrated circuits.
  • identical die are typically formed in a repeated pattern on a surface of a semiconductor wafer.
  • Each die includes an image processor or other image processing circuitry as described herein, and may include other structures or circuits.
  • the individual die are cut or diced from the wafer, then packaged as an integrated circuit.
  • One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered embodiments of the invention.
  • image processing system 100 as shown in FIG. 1 is exemplary only, and the system 100 in other embodiments may include other elements in addition to or in place of those specifically shown, including one or more elements of a type commonly found in a conventional implementation of such a system.
  • the image processing system 100 is implemented as a video gaming system or other type of gesture-based system that processes image streams in order to recognize user gestures.
  • the disclosed techniques can be similarly adapted for use in a wide variety of other systems requiring a gesture-based human-machine interface, and can also be applied to other applications, such as machine vision systems in robotics and other industrial applications that utilize gesture recognition.
  • embodiments of the invention are not limited to use in recognition of hand gestures, but can be applied to other types of gestures as well.
  • the term "gesture” as used herein is therefore intended to be broadly construed.
  • the input images 111 received in the image processor 102 from an image source comprise input depth images each referred to as an input frame.
  • this source may comprise a depth imager such as an SL or ToF camera comprising a depth image sensor.
  • Other types of image sensors including, for example, grayscale image sensors, color image sensors or infrared image sensors, may be used in other embodiments.
  • a given image sensor typically provides image data in the form of one or more rectangular matrices of real or integer numbers corresponding to respective input image pixels. These matrices can contain per-pixel information such as depth values and corresponding amplitude or intensity values. Other per-pixel information such as color, phase and validity may additionally or alternatively be provided.
  • Referring now to FIG. 2, a process 200 performed by the static pose recognition module 114 in an illustrative embodiment is shown.
  • the process is assumed to be applied to preprocessed image frames received from a preprocessing subsystem of the set of additional subsystems 116.
  • the preprocessing subsystem performs noise reduction and background estimation and removal, using techniques such as those identified above.
  • the image frames are received by the preprocessing subsystem as raw image data from an image sensor of a depth imager such as a ToF camera or other type of ToF imager.
  • the image sensor in this embodiment is assumed to comprise a variable frame rate image sensor, such as a ToF image sensor configured to operate at a variable frame rate.
  • the static pose recognition module 114 can operate at a lower frame rate than other recognition modules 115, such as recognition modules configured to recognize cursor gestures and dynamic gestures.
  • Other types of sources supporting variable or fixed frame rates can be used in other embodiments.
  • the process 200 includes the following steps:
  • Step 1 Find hand ROI
  • This step in the present embodiment more particularly involves defining an ROI mask for a hand in the input image.
  • the ROI mask is implemented as a binary mask in the form of an image, also referred to herein as a "hand image," in which pixels within the ROI have a certain binary value, illustratively a logic 1 value, and pixels outside the ROI have the complementary binary value, illustratively a logic 0 value.
  • the ROI corresponds to a hand within the input image, and is therefore also referred to herein as a hand ROI.
  • An example of an ROI mask comprising a hand ROI can be seen in FIGS. 4 through 6 in the context of estimation of hand features.
  • In FIGS. 4 through 6, the ROI mask is shown with 1-valued or "white" pixels identifying those pixels within the ROI, and 0-valued or "black" pixels identifying those pixels outside of the ROI.
  • the hand ROI in the example of FIGS. 4, 5 and 6 is in the form of a particular type of static hand pose, namely, a "fingergun" static hand pose. This is one of multiple static hand poses that may be recognized using the process 200.
  • the input image in which the hand ROI is identified in Step 1 is assumed to be supplied by a ToF imager.
  • a ToF imager typically comprises a light emitting diode (LED) light source that illuminates an imaged scene.
  • Distance is measured based on the time difference between the emission of light onto the scene from the LED source and the receipt at the image sensor of corresponding light reflected back from objects in the scene. Using the speed of light, one can calculate the distance to a given point on an imaged object for a particular pixel as a function of the time difference between emitting the incident light and receiving the reflected light.
  • distance d to the given point can be computed as d = cT/2, where T is the time difference between emitting the incident light and receiving the reflected light, c is the speed of light, and the constant factor 2 is due to the fact that the light passes through the distance twice, as incident light from the light source to the object and as reflected light from the object back to the image sensor. This distance is more generally referred to herein as a depth value.
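As a quick illustration of the d = cT/2 relationship, a minimal sketch (the function name is ours, not the patent's):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(time_diff_s: float) -> float:
    """Depth value from ToF round-trip time: d = c * T / 2.

    The factor of 2 accounts for the light traveling to the object
    and back to the image sensor.
    """
    return C * time_diff_s / 2.0
```

For example, a round-trip time of about 6.67 nanoseconds corresponds to a depth of roughly one meter.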
  • the time difference between emitting and receiving light may be measured, for example, by using a periodic light signal, such as a sinusoidal light signal or a triangle wave light signal, and measuring the phase shift between the emitted periodic light signal and the reflected periodic signal received back at the image sensor.
  • the ToF imager can be configured, for example, to calculate a correlation function c(τ) between the input reflected signal s(t) and the output emitted signal g(t) shifted by a predefined value τ, in accordance with the following equation: c(τ) = (1/T₀) ∫₀^T₀ s(t)·g(t + τ) dt, where T₀ denotes the integration period.
  • the phase images in this embodiment comprise respective sets of A0, A1, A2 and A3 correlation values computed for a set of image pixels.
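The text does not spell out how depth is recovered from the A0 through A3 correlation samples. One common four-phase convention (samples taken at 0°, 90°, 180° and 270° shifts of τ, with the modulation frequency as an assumed parameter) looks roughly like:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def four_phase_depth(a0: float, a1: float, a2: float, a3: float,
                     mod_freq_hz: float) -> float:
    """Sketch of four-phase ToF depth recovery (a common convention,
    not taken from the patent text): estimate the phase shift between
    emitted and reflected signals from the four correlation samples,
    then convert the phase shift to distance."""
    phase = math.atan2(a3 - a1, a0 - a2)
    if phase < 0.0:
        phase += 2.0 * math.pi  # keep the phase in [0, 2*pi)
    # one full phase cycle corresponds to half a modulation wavelength
    return C * phase / (4.0 * math.pi * mod_freq_hz)
```

The exact sign and sample ordering vary between sensors, so this should be read as illustrative only.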
  • the resulting raw image data is transferred from the image sensor to internal memory of the image processor 102 for preprocessing in the manner previously described.
  • the hand ROI can be identified in the preprocessed image using any of a variety of techniques. For example, it is possible to utilize the techniques disclosed in the above-cited Russian Patent Application No. 2013135506 to determine the hand ROI. Accordingly, the first step of the process 200 may be implemented in a preprocessing block of the GR system 110 rather than in the static pose recognition module 114.
  • the hand ROI can be determined using threshold logic applied to depth and amplitude values of the image. This can be more particularly implemented as follows:
  • amplitude values are known for respective pixels of the image, one can select only those pixels with amplitude values greater than some predefined threshold. This approach is applicable not only for images from ToF imagers, but also for images from other types of imagers, such as infrared imagers with active lighting. For both ToF imagers and infrared imagers with active lighting, the closer an object is to the imager, the higher the amplitude values of the corresponding image pixels, not taking into account reflecting materials. Accordingly, selecting only pixels with relatively high amplitude values allows one to preserve close objects from an imaged scene and to eliminate far objects from the imaged scene. It should be noted that for ToF imagers, pixels with lower amplitude values tend to have higher error in their corresponding depth values, and so removing pixels with low amplitude values additionally protects one from using incorrect depth information.
  • depth values are known for respective pixels of the image, one can select only those pixels with depth values falling between predefined minimum and maximum threshold depths Dmin and Dmax. These thresholds are set to appropriate distances between which the hand is expected to be located within the image.
  • Opening or closing morphological operations utilizing erosion and dilation operators can be applied to remove dots and holes as well as other spatial noise in the image.
  • the output of the above-described ROI determination process is a binary ROI mask for the hand in the image. It can be in the form of an image having the same size as the input image, or a sub-image containing only those pixels that are part of the ROI. For further description below, it is assumed that the ROI mask is an image having the same size as the input image. As mentioned previously, the ROI mask is also referred to herein as a "hand image” and the ROI itself within the ROI mask is referred to as a "hand ROI.”
  • the output may include additional information such as an average of the depth values for the pixels in the ROI. This average of depth values for the ROI pixels is denoted elsewhere herein as meanZ.
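The threshold-and-morphology ROI determination of Step 1 can be sketched as follows; the threshold values and the 3x3 cross structuring element are illustrative choices, not specified in the text:

```python
import numpy as np

def _erode(m: np.ndarray) -> np.ndarray:
    # binary erosion with a 3x3 cross; pixels outside the image count as 0
    p = np.pad(m, 1, constant_values=False)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def _dilate(m: np.ndarray) -> np.ndarray:
    # binary dilation with a 3x3 cross
    p = np.pad(m, 1, constant_values=False)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def hand_roi_mask(depth, amplitude, amp_thresh, d_min, d_max):
    """Binary hand ROI mask from per-pixel depth and amplitude values:
    keep pixels with amplitude above a threshold and depth between
    Dmin and Dmax, then apply morphological opening and closing to
    remove dots, holes and other spatial noise."""
    mask = (amplitude > amp_thresh) & (depth >= d_min) & (depth <= d_max)
    mask = _dilate(_erode(mask))  # opening: removes isolated dots
    mask = _erode(_dilate(mask))  # closing: fills small holes
    return mask
```

The meanZ value mentioned above is then simply the mean of `depth[mask]` over the ROI pixels.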
  • Step 2 Find hand skeleton
  • Technique A is less computationally complex but also less precise than the second exemplary technique, denoted Technique B.
  • the hand skeleton comprises the set of stored points for the respective rows.
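A minimal sketch of such a row-wise skeleton, storing one point per row of the ROI mask. Here we use the midpoint between the leftmost and rightmost ROI pixels of each row; the exact point selection used by Techniques A and B is not reproduced in the text:

```python
import numpy as np

def row_skeleton(mask: np.ndarray) -> list:
    """One skeleton point per mask row: the midpoint of that row's
    ROI pixels, returned as (row, column) pairs. Rows with no ROI
    pixels contribute no point."""
    points = []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            points.append((y, (xs[0] + xs[-1]) / 2.0))
    return points
```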
  • Step 3 Find hand main direction
  • Exemplary techniques for finding the hand main direction described below include one of substeps 1a and 1b, each possibly combined with an optional substep 2.
  • Other techniques can be used for solving the system of equations.
  • PCA Principal Component Analysis
  • angle = arctg(a), where arctg denotes "arctangent."
  • This angle is an example of what is more generally referred to herein as a "main direction" of a hand. Accordingly, hand main direction can be characterized by the prediction line itself, by an angle made by the prediction line relative to the vertical axis, or by other information based on the prediction line.
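A least-squares sketch of this prediction-line fit: fit x = a*y + b through the skeleton points and take arctan(a) as the angle relative to the vertical axis (function and variable names are ours):

```python
import math
import numpy as np

def main_direction(points: list):
    """Fit the prediction line x = a * y + b through skeleton points
    given as (row, column) pairs, and return (a, b, angle), where
    angle = arctan(a) is measured from the vertical axis."""
    ys = np.array([p[0] for p in points], dtype=float)
    xs = np.array([p[1] for p in points], dtype=float)
    a, b = np.polyfit(ys, xs, 1)  # least-squares line fit
    return a, b, math.atan(a)
```

A perfectly vertical hand yields a near-zero slope and therefore a near-zero angle.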
  • FIG. 3 illustrates an exemplary process of determining a main direction of a hand using the above-described substeps.
  • the process starts with a skeleton 300 and includes steps 302 through 310.
  • any outliers are determined as those points of the skeleton having a distance from the prediction line that is greater than kδ.
  • If the number of outliers is determined to be greater than zero in step 308, the outliers are excluded from the skeleton in step 310, and otherwise the process ends by outputting the angle and the prediction line parameters a and b as indicated in step 312. From step 310, a feedback line 314 returns the process to step 302 to recompute the prediction line with the outliers excluded from the skeleton as described in substep 2 above. Each time the process is repeated, additional outliers are excluded via step 310 and the prediction line is recomputed in step 302 using the resulting reduced set of skeleton points.
  • the feedback may be limited to a specified maximum number of passes through the process.
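The outlier-exclusion feedback of FIG. 3 can be sketched as an iterative refit. Here δ is taken as the standard deviation of the residual distances from the line and k is a tunable constant; both choices are our assumptions, since the text does not define them:

```python
import numpy as np

def robust_direction(points: list, k: float = 2.0, max_passes: int = 5):
    """Iteratively fit the prediction line x = a * y + b, excluding
    skeleton points whose residual distance from the line exceeds
    k * delta, then refitting on the reduced point set, up to a
    specified maximum number of passes."""
    pts = np.asarray(points, dtype=float)
    a = b = 0.0
    for _ in range(max_passes):
        a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
        resid = pts[:, 1] - (a * pts[:, 0] + b)
        delta = resid.std()
        keep = np.abs(resid) <= k * delta
        if keep.all() or keep.sum() < 2:
            break  # no outliers left, or too few points to refit
        pts = pts[keep]
    return a, b
```

With a gross outlier in the skeleton, the refit converges to the line through the remaining points.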
  • Step 4
  • This step in the present embodiment more particularly involves defining the palm boundary and removing from the ROI any pixels below the palm boundary, leaving essentially only the palm and fingers in a modified hand image.
  • Such a step advantageously eliminates, for example, any portions of the arm from the wrist to the elbow, as these portions can be highly variable due to the presence of items such as sleeves, wristwatches and bracelets, and in any event are typically not useful for static hand pose recognition.
  • the palm boundary may be determined by taking into account that the typical length of the human hand is about 20-25 centimeters (cm), and removing from the ROI all pixels located farther than a 25 cm threshold distance from the uppermost fingertip along the previously-determined main direction of the hand.
  • the uppermost fingertip can be identified as the uppermost point of the hand skeleton or as the uppermost 1 value in the binary ROI mask.
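Converting the physical threshold (for example, the 25 cm mentioned above) into a pixel count needs, in addition to the average ROI depth meanZ, a camera parameter such as the focal length in pixels; a pinhole-model sketch (the focal length is our assumption, not given in the text):

```python
def length_to_pixels(length_cm: float, mean_z_m: float,
                     focal_px: float) -> float:
    """Approximate number of image pixels spanned by a physical length
    at depth mean_z_m, under a pinhole camera model:
    pixels = focal_px * length_m / depth_m."""
    return focal_px * (length_cm / 100.0) / mean_z_m
```

For a hand at 1 m from a camera with a 400-pixel focal length, the 25 cm threshold corresponds to about 100 pixels.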
  • the 25 cm threshold can be converted to a particular number of image pixels by using an average depth value determined for the pixels in the ROI, as mentioned in conjunction with the description of Step 1 above.
  • Step 5. Scan hand image
  • This step in the present embodiment more particularly involves scanning the modified hand image resulting from Step 4.
  • the scanning is performed line-by-line over lines that are perpendicular to the main direction line previously determined in Step 3.
  • the ROI mask is effectively modified so as to correspond to a vertically-oriented hand. This can be achieved by rotating the existing ROI mask by an angle a, where a is the angle between the main direction and the vertical axis as determined in Step 3, but such rotation is not computationally efficient for binary masks. Instead, perpendiculars to the main direction line are determined, and the hand image is scanned line-by-line along such perpendiculars.
  • the latter approach may be considered a type of "virtual" rotation of the ROI mask, as opposed to a "real" rotation of the ROI mask by the angle a.
  • Technique A is less computationally complex but also less precise than the second exemplary technique, denoted Technique B.
  • This technique uses the angle a to calculate the perpendicular to the main direction, but scans using precise steps that are equal to 1 pixel both for movement along a given perpendicular to the main direction line and for movement along the main direction line from perpendicular to perpendicular.
  • the coordinates can be rounded to nearest integer values or various types of interpolation (e.g., bilinear, bi-cubic, etc.) can be applied.
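A minimal sketch of the Technique B scanning ("virtual rotation") might look like this; the choice of scan origin and the nearest-integer rounding variant are assumptions, and a practical implementation would bound the scan by the ROI bounding box rather than the full image diagonal.

```python
import math

def scan_lines(roi_mask, angle):
    """Scan a binary mask line-by-line along perpendiculars to the main
    direction (Technique B): 1-pixel steps both along each perpendicular
    and between perpendiculars, with nearest-integer rounding.
    Yields one list of mask values per scan line."""
    h, w = len(roi_mask), len(roi_mask[0])
    # main direction unit vector (angle measured from the vertical axis)
    dy, dx = math.cos(angle), math.sin(angle)
    # perpendicular unit vector
    py, px = -dx, dy
    diag = int(math.hypot(h, w))
    for i in range(diag):                 # move along the main direction
        line = []
        for j in range(-diag, diag):      # move along the perpendicular
            y = round(i * dy + j * py)
            x = round(i * dx + j * px)
            if 0 <= y < h and 0 <= x < w:
                line.append(roi_mask[y][x])
        if line:
            yield line
```

With angle 0 the perpendiculars are simply the image rows, so the scan degenerates to an ordinary row-by-row pass, which is consistent with the "virtual rotation" view of the operation.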
  • This modified ROI mask is also referred to herein as a vertically-oriented ROI mask.
  • as another way to obtain a vertically-oriented ROI mask, it is possible to obtain the modified ROI mask by performing a real rotation of the hand ROI by the angle a, although such a rotation would typically be less efficient than the exemplary virtual rotation techniques described above.
  • This step generally involves estimating hand features using the vertically-oriented ROI mask described above.
  • the estimated hand features, after any needed normalization in Step 7, are provided as input to classifiers configured to recognize particular static poses in Step 8.
  • the estimation of the hand features can be performed as part of the image scanning of Step 5, in which case both Step 5 and Step 6 can be performed as a single combined step of the process 200. At least portions of Step 7 may also be implemented in such a combined step.
  • the hand features determined using the vertically-oriented ROI mask in Step 6 include at least a subset of the following features:
  • Width of the hand given by the difference between the column numbers of the leftmost and the rightmost ROI pixels.
  • top finger area is defined as the number of ROI pixels that are not farther than h_top cm from the uppermost ROI pixel.
  • the top finger area used in this feature is illustrated in FIG. 4 as the darkened portion of the tip of the pointing finger.
  • the line 400 indicates the main direction line of the hand in the ROI mask.
  • the side finger area is defined as the minimum of the number of ROI pixels that are not farther than h_left cm from the leftmost ROI pixel and the number of ROI pixels that are not farther than h_right cm from the rightmost ROI pixel.
  • the side finger area computation is performed by minimization element 402 in FIG. 4 using the darkened areas shown at left and right sides of the ROI mask.
  • Degree of non-convexity given by the square root of the number of pixels with value 0 that are bordered by at least two ROI pixels with value 1 as determined while scanning the hand image along perpendiculars to the main direction as per Step 5. This is illustrated in FIG. 5, which shows a set of mask scanning lines 500 corresponding to respective perpendiculars of the main direction line 400 of the ROI mask.
  • the identified 0-valued pixels are in two regions of the image, one in the trough between the thumb and forefinger and the other between a pair of knuckles of the hand, and the numbers of pixels in these two regions are combined by a summing element 502.
  • the output of the summing element 502 is subject to a square root operation not specifically illustrated in the figure in order to generate the feature.
  • the degree of non-convexity is equal to zero for all convex ROIs.
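Assuming a vertically-oriented NumPy mask in which the scan lines of Step 5 correspond to image rows, several of the listed features can be sketched as follows. The thresholds `h_top_px` and `h_side_px` are the h_top and h_left = h_right values already converted to pixels, and interpreting a 0-pixel "bordered by at least two ROI pixels" as one lying between 1-valued pixels on its scan line is an assumption.

```python
import numpy as np

def hand_features(mask, h_top_px, h_side_px):
    """Estimate orientation-invariant hand features (Step 6) from a
    vertically-oriented binary ROI mask."""
    ys, xs = np.nonzero(mask)
    width = int(xs.max() - xs.min())      # leftmost/rightmost column difference
    height = int(ys.max() - ys.min())
    area = int(mask.sum())
    # top finger area: ROI pixels within h_top_px of the uppermost pixel
    top_area = int(mask[ys.min():ys.min() + h_top_px + 1].sum())
    # side finger area: minimum of the left-side and right-side counts
    left_area = int(mask[:, xs.min():xs.min() + h_side_px + 1].sum())
    right_area = int(mask[:, xs.max() - h_side_px:].sum())
    side_area = min(left_area, right_area)
    # degree of non-convexity: square root of the number of 0-valued
    # pixels lying between 1-valued pixels on their scan line
    holes = 0
    for row in mask:
        on = np.nonzero(row)[0]
        if len(on) > 1:
            holes += (on[-1] - on[0] + 1) - len(on)
    nonconvexity = float(np.sqrt(holes))
    return dict(width=width, height=height, area=area,
                top_finger_area=top_area, side_finger_area=side_area,
                nonconvexity=nonconvexity)
```

As the text notes, the non-convexity term comes out as zero for any convex mask, since no row then contains interior zeros between its extreme 1-valued pixels.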
  • the above-described hand features can all be calculated at relatively low complexity using one or at most two scanning passes through the ROI mask.
  • hand features are exemplary only, and additional or alternative hand features may be utilized to facilitate static pose recognition in other embodiments.
  • various functions of one or more of the above-described hand features or other related hand features may be used as additional or alternative hand features.
  • functions other than square root may be used in conjunction with hand area, top finger area, side finger area or other features.
  • techniques other than those described above may be used to compute the features.
  • the particular number of features utilized in a given embodiment will typically depend on factors such as the number of different hand pose classes to be recognized, the shape of an average hand inside each class, and the recognition quality requirements. Techniques such as Monte-Carlo simulations or genetic search algorithms can be utilized to determine an optimal subset of the features for given levels of computational complexity and recognition quality.
  • a pointing gesture detector having only three distinct classes, corresponding to pointing forefinger, pointing forefinger with open thumb ("fingergun"), and all other static hand poses, respectively, can achieve an approximately 0.995 recognition rate using the subset of features 1, 2, 3, 6, 7 and 8.
  • the additional feature normalization can then be implemented as follows. If the average depth value for the ROI pixels is not available, linear features such as width, height and perimeter are normalized by dividing each such linear feature by the square root of the hand area, while second order features such as moments are normalized by dividing each such second order feature by the hand area itself. If the average depth value for the ROI pixels is available, linear features are instead multiplied by the average depth value and second order features are multiplied by the square of the average depth value.
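The normalization rule of Step 7 can be sketched as follows; the feature names and the dictionary interface are illustrative only.

```python
import math

def normalize_features(features, hand_area, avg_depth=None,
                       linear=("width", "height", "perimeter")):
    """Normalize hand features (Step 7): without depth, divide linear
    features by sqrt(hand area) and second-order features by the area
    itself; with depth, multiply them by avg_depth and avg_depth**2
    respectively."""
    out = {}
    for name, value in features.items():
        if avg_depth is None:
            scale = math.sqrt(hand_area) if name in linear else hand_area
            out[name] = value / scale
        else:
            scale = avg_depth if name in linear else avg_depth ** 2
            out[name] = value * scale
    return out
```

Either branch cancels the dependence of the raw features on hand scale: area grows as the square of any linear dimension, and apparent size shrinks in proportion to depth.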
  • Step 8. Recognition based on classification
  • classification techniques are applied to recognize static hand poses based on the normalized hand features from Step 7.
  • static pose classes that may be utilized in a given embodiment include finger, palm with fingers, palm without fingers, hand edge, pinch, fist, fingergun and head.
  • Each static pose class utilizes a corresponding classifier configured in accordance with a classification technique such as, for example, Gaussian Mixture Models (GMMs), Nearest Neighbor, Decision Trees, and Neural Networks. Additional details regarding the use of classifiers based on GMMs in the recognition of static hand poses can be found in the above-cited Russian Patent Application No. 2013134325.
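As an illustrative sketch only (not the classifier configuration of the cited Russian application), a degenerate one-component-per-class GMM with diagonal covariance over the normalized feature vector can be written as:

```python
import math

class GaussianPoseClassifier:
    """Minimal one-component-per-class GMM: each static pose class is
    modeled by a single diagonal Gaussian over the feature vector."""
    def __init__(self):
        self.classes = {}

    def fit(self, label, samples):
        n = len(samples)
        dim = len(samples[0])
        mean = [sum(s[i] for s in samples) / n for i in range(dim)]
        # floor the variance to keep the log-likelihood finite
        var = [max(sum((s[i] - mean[i]) ** 2 for s in samples) / n, 1e-6)
               for i in range(dim)]
        self.classes[label] = (mean, var)

    def log_likelihood(self, x, label):
        mean, var = self.classes[label]
        return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                   for xi, m, v in zip(x, mean, var))

    def predict(self, x):
        # choose the pose class whose Gaussian best explains the features
        return max(self.classes, key=lambda c: self.log_likelihood(x, c))
```

A production classifier would use multiple mixture components per class, but the decision rule — maximum class-conditional likelihood over the pose vocabulary — is the same.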
  • FIGS. 2 and 3 are exemplary only, and additional or alternative blocks can be used in other embodiments.
  • blocks illustratively shown as being executed serially in the figures can be performed at least in part in parallel with one or more other blocks or in other pipelined configurations in other embodiments.
  • the illustrative embodiments provide significantly improved gesture recognition performance relative to conventional arrangements.
  • these embodiments provide computationally-efficient static pose recognition using estimated hand features that are substantially invariant to hand orientation within an image and in some cases also substantially invariant to scale and movement of the hand within an image. This avoids the need for complex hand image normalizations that would otherwise be required to deal with variations in hand orientation, scale and movement. Accordingly, the GR system performance is accelerated while ensuring high precision in the recognition process.
  • the disclosed techniques can be applied to a wide range of different GR systems, using depth, grayscale, color, infrared and other types of imagers which support a variable frame rate, as well as imagers which do not support a variable frame rate.
  • Different portions of the GR system 110 can be implemented in software, hardware, firmware or various combinations thereof.
  • software utilizing hardware accelerators may be used for some processing blocks while other blocks are implemented using combinations of hardware and firmware.
  • At least portions of the GR-based output 112 of GR system 110 may be further processed in the image processor 102, or supplied to another processing device 106 or image destination, as mentioned previously.

Abstract

An image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system comprising a static pose recognition module. The static pose recognition module is configured to identify a hand region of interest in at least one image, to perform a skeletonization operation on the hand region of interest, to determine a main direction of the hand region of interest utilizing a result of the skeletonization operation, to perform a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation, and to recognize a static pose of the hand region of interest based on the estimated hand features.

Description

IMAGE PROCESSOR COMPRISING GESTURE RECOGNITION SYSTEM
WITH COMPUTATIONALLY-EFFICIENT STATIC HAND POSE RECOGNITION
Field
The field relates generally to image processing, and more particularly to image processing for recognition of gestures.
Background
Image processing is important in a wide variety of different applications, and such processing may involve two-dimensional (2D) images, three-dimensional (3D) images, or combinations of multiple images of different types. For example, a 3D image of a spatial scene may be generated in an image processor using triangulation based on multiple 2D images captured by respective cameras arranged such that each camera has a different view of the scene. Alternatively, a 3D image can be generated directly using a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera. These and other 3D images, which are also referred to herein as depth images, are commonly utilized in machine vision applications, including those involving gesture recognition.
In a typical gesture recognition arrangement, raw image data from an image sensor is usually subject to various preprocessing operations. The preprocessed image data is then subject to additional processing used to recognize gestures in the context of particular gesture recognition applications. Such applications may be implemented, for example, in video gaming systems, kiosks or other systems providing a gesture-based user interface. These other systems include various electronic consumer devices such as laptop computers, tablet computers, desktop computers, mobile phones and television sets.
Summary
In one embodiment, an image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system comprising a static pose recognition module. The static pose recognition module is configured to identify a hand region of interest in at least one image, to perform a skeletonization operation on the hand region of interest, to determine a main direction of the hand region of interest utilizing a result of the skeletonization operation, to perform a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation, and to recognize a static pose of the hand region of interest based on the estimated hand features.
By way of example only, performing a scanning operation utilizing the determined main direction may comprise determining a plurality of lines perpendicular to a line of the main direction, and scanning the hand region of interest along the perpendicular lines.
Other embodiments of the invention include but are not limited to methods, apparatus, systems, processing devices, integrated circuits, and computer-readable storage media having computer program code embodied therein.
Brief Description of the Drawings
FIG. 1 is a block diagram of an image processing system comprising an image processor implementing a static pose recognition module in an illustrative embodiment.
FIG. 2 is a flow diagram of an exemplary static pose recognition process performed by the static pose recognition module in the image processor of FIG. 1.
FIG. 3 is a flow diagram showing a more detailed view of a process for determining a main direction of a hand region of interest in one of the steps of the FIG. 2 process.
FIGS. 4, 5 and 6 illustrate the estimation of hand features utilizing the main direction determined by the process of FIG. 3.
Detailed Description
Embodiments of the invention will be illustrated herein in conjunction with exemplary image processing systems that include image processors or other types of processing devices configured to perform gesture recognition. It should be understood, however, that embodiments of the invention are more generally applicable to any image processing system or associated device or technique that involves recognizing static poses in one or more images.
FIG. 1 shows an image processing system 100 in an embodiment of the invention. The image processing system 100 comprises an image processor 102 that is configured for communication over a network 104 with a plurality of processing devices 106-1, 106-2, . . . 106-M. The image processor 102 implements a recognition subsystem 108 within a gesture recognition (GR) system 110. The GR system 110 in this embodiment processes input images 111 from one or more image sources and provides corresponding GR-based output 112. The GR-based output 112 may be supplied to one or more of the processing devices 106 or to other system components not specifically illustrated in this diagram. The recognition subsystem 108 of GR system 110 more particularly comprises a static pose recognition module 114 and one or more other recognition modules 115. The other recognition modules may comprise, for example, respective recognition modules configured to recognize cursor gestures and dynamic gestures. The operation of illustrative embodiments of the GR system 110 of image processor 102 will be described in greater detail below in conjunction with FIGS. 2 through 6.
The recognition subsystem 108 receives inputs from additional subsystems 116, which may comprise one or more image processing subsystems configured to implement functional blocks associated with gesture recognition in the GR system 110, such as, for example, functional blocks for input frame acquisition, noise reduction, background estimation and removal, or other types of preprocessing. In some embodiments, the background estimation and removal block is implemented as a separate subsystem that is applied to an input image after a preprocessing block is applied to the image.
Exemplary noise reduction techniques suitable for use in the GR system 110 are described in PCT International Application PCT/US13/56937, filed on August 28, 2013 and entitled "Image Processor With Edge-Preserving Noise Suppression Functionality," which is commonly assigned herewith and incorporated by reference herein.
Exemplary background estimation and removal techniques suitable for use in the GR system 110 are described in Russian Patent Application No. 2013135506, filed July 29, 2013 and entitled "Image Processor Configured for Efficient Estimation and Elimination of Background Information in Images," which is commonly assigned herewith and incorporated by reference herein.
It should be understood, however, that these particular functional blocks are exemplary only, and other embodiments of the invention can be configured using other arrangements of additional or alternative functional blocks.
In the FIG. 1 embodiment, the recognition subsystem 108 generates GR events for consumption by one or more of a set of GR applications 118. For example, the GR events may comprise information indicative of recognition of one or more particular gestures within one or more frames of the input images 111, such that a given GR application in the set of GR applications 118 can translate that information into a particular command or set of commands to be executed by that application. Accordingly, the recognition subsystem 108 recognizes within the image a gesture from a specified gesture vocabulary and generates a corresponding gesture pattern identifier (ID) and possibly additional related parameters for delivery to one or more of the applications 118. The configuration of such information is adapted in accordance with the specific needs of the application.
Additionally or alternatively, the GR system 110 may provide GR events or other information, possibly generated by one or more of the GR applications 118, as GR-based output 112. Such output may be provided to one or more of the processing devices 106. In other embodiments, at least a portion of the set of GR applications 118 is implemented at least in part on one or more of the processing devices 106.
Portions of the GR system 110 may be implemented using separate processing layers of the image processor 102. These processing layers comprise at least a portion of what is more generally referred to herein as "image processing circuitry" of the image processor 102. For example, the image processor 102 may comprise a preprocessing layer implementing a preprocessing module and a plurality of higher processing layers for performing other functions associated with recognition of gestures within frames of an input image stream comprising the input images 111. Such processing layers may also be implemented in the form of respective subsystems of the GR system 110.
It should be noted, however, that embodiments of the invention are not limited to recognition of static or dynamic hand gestures, but can instead be adapted for use in a wide variety of other machine vision applications involving gesture recognition, and may comprise different numbers, types and arrangements of modules, subsystems, processing layers and associated functional blocks.
Also, certain processing operations associated with the image processor 102 in the present embodiment may instead be implemented at least in part on other devices in other embodiments. For example, preprocessing operations may be implemented at least in part in an image source comprising a depth imager or other type of imager that provides at least a portion of the input images 111. It is also possible that one or more of the applications 118 may be implemented on a different processing device than the subsystems 108 and 116, such as one of the processing devices 106.
Moreover, it is to be appreciated that the image processor 102 may itself comprise multiple distinct processing devices, such that different portions of the GR system 110 are implemented using two or more processing devices. The term "image processor" as used herein is intended to be broadly construed so as to encompass these and other arrangements.
The GR system 110 performs preprocessing operations on received input images 111 from one or more image sources. This received image data in the present embodiment is assumed to comprise raw image data received from a depth sensor, but other types of received image data may be processed in other embodiments. Such preprocessing operations may include noise reduction and background removal.
The raw image data received by the GR system 110 from the depth sensor may include a stream of frames comprising respective depth images, with each such depth image comprising a plurality of depth image pixels. For example, a given depth image D may be provided to the GR system 110 in the form of a matrix of real values. A given such depth image is also referred to herein as a depth map.
A wide variety of other types of images or combinations of multiple images may be used in other embodiments. It should therefore be understood that the term "image" as used herein is intended to be broadly construed.
The image processor 102 may interface with a variety of different image sources and image destinations. For example, the image processor 102 may receive input images 111 from one or more image sources and provide processed images as part of GR-based output 112 to one or more image destinations. At least a subset of such image sources and image destinations may be implemented at least in part utilizing one or more of the processing devices 106.
Accordingly, at least a subset of the input images 111 may be provided to the image processor 102 over network 104 for processing from one or more of the processing devices 106. Similarly, processed images or other related GR-based output 112 may be delivered by the image processor 102 over network 104 to one or more of the processing devices 106. Such processing devices may therefore be viewed as examples of image sources or image destinations as those terms are used herein.
A given image source may comprise, for example, a 3D imager such as an SL camera or a ToF camera configured to generate depth images, or a 2D imager configured to generate grayscale images, color images, infrared images or other types of 2D images. It is also possible that a single imager or other image source can provide both a depth image and a corresponding 2D image such as a grayscale image, a color image or an infrared image. For example, certain types of existing 3D cameras are able to produce a depth map of a given scene as well as a 2D image of the same scene. Alternatively, a 3D imager providing a depth map of a given scene can be arranged in proximity to a separate high-resolution video camera or other 2D imager providing a 2D image of substantially the same scene.
Another example of an image source is a storage device or server that provides images to the image processor 102 for processing. A given image destination may comprise, for example, one or more display screens of a human-machine interface of a computer or mobile phone, or at least one storage device or server that receives processed images from the image processor 102.
It should also be noted that the image processor 102 may be at least partially combined with at least a subset of the one or more image sources and the one or more image destinations on a common processing device. Thus, for example, a given image source and the image processor 102 may be collectively implemented on the same processing device. Similarly, a given image destination and the image processor 102 may be collectively implemented on the same processing device.
In the present embodiment, the image processor 102 is configured to recognize hand gestures, although the disclosed techniques can be adapted in a straightforward manner for use with other types of gesture recognition processes.
As noted above, the input images 111 may comprise respective depth images generated by a depth imager such as an SL camera or a ToF camera. Other types and arrangements of images may be received, processed and generated in other embodiments, including 2D images or combinations of 2D and 3D images.
The particular arrangement of subsystems, applications and other components shown in image processor 102 in the FIG. 1 embodiment can be varied in other embodiments. For example, an otherwise conventional image processing integrated circuit or other type of image processing circuitry suitably modified to perform processing operations as disclosed herein may be used to implement at least a portion of one or more of the components 114, 115, 116 and 118 of image processor 102. One possible example of image processing circuitry that may be used in one or more embodiments of the invention is an otherwise conventional graphics processor suitably reconfigured to perform functionality associated with one or more of the components 114, 115, 116 and 118.
The processing devices 106 may comprise, for example, computers, mobile phones, servers or storage devices, in any combination. One or more such devices also may include, for example, display screens or other user interfaces that are utilized to present images generated by the image processor 102. The processing devices 106 may therefore comprise a wide variety of different destination devices that receive processed image streams or other types of GR-based output 112 from the image processor 102 over the network 104, including by way of example at least one server or storage device that receives one or more processed image streams from the image processor 102. Although shown as being separate from the processing devices 106 in the present embodiment, the image processor 102 may be at least partially combined with one or more of the processing devices 106. Thus, for example, the image processor 102 may be implemented at least in part using a given one of the processing devices 106. As a more particular example, a computer or mobile phone may be configured to incorporate the image processor 102 and possibly a given image source. Image sources utilized to provide input images 111 in the image processing system 100 may therefore comprise cameras or other imagers associated with a computer, mobile phone or other processing device. As indicated previously, the image processor 102 may be at least partially combined with one or more image sources or image destinations on a common processing device.
The image processor 102 in the present embodiment is assumed to be implemented using at least one processing device and comprises a processor 120 coupled to a memory 122. The processor 120 executes software code stored in the memory 122 in order to control the performance of image processing operations. The image processor 102 also comprises a network interface 124 that supports communication over network 104. The network interface 124 may comprise one or more conventional transceivers. In other embodiments, the image processor 102 need not be configured for communication with other devices over a network, and in such embodiments the network interface 124 may be eliminated.
The processor 120 may comprise, for example, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor (DSP), or other similar processing device component, as well as other types and arrangements of image processing circuitry, in any combination.
The memory 122 stores software code for execution by the processor 120 in implementing portions of the functionality of image processor 102, such as the subsystems 108 and 116 and the GR applications 118. A given such memory that stores software code for execution by a corresponding processor is an example of what is more generally referred to herein as a computer-readable medium or other type of computer program product having computer program code embodied therein, and may comprise, for example, electronic memory such as random access memory (RAM) or read-only memory (ROM), magnetic memory, optical memory, or other types of storage devices in any combination. As indicated above, the processor may comprise portions or combinations of a microprocessor, ASIC, FPGA, CPU, ALU, DSP or other image processing circuitry. It should also be appreciated that embodiments of the invention may be implemented in the form of integrated circuits. In a given such integrated circuit implementation, identical die are typically formed in a repeated pattern on a surface of a semiconductor wafer. Each die includes an image processor or other image processing circuitry as described herein, and may include other structures or circuits. The individual die are cut or diced from the wafer, then packaged as an integrated circuit. One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered embodiments of the invention.
The particular configuration of image processing system 100 as shown in FIG. 1 is exemplary only, and the system 100 in other embodiments may include other elements in addition to or in place of those specifically shown, including one or more elements of a type commonly found in a conventional implementation of such a system.
For example, in some embodiments, the image processing system 100 is implemented as a video gaming system or other type of gesture-based system that processes image streams in order to recognize user gestures. The disclosed techniques can be similarly adapted for use in a wide variety of other systems requiring a gesture-based human-machine interface, and can also be applied to other applications, such as machine vision systems in robotics and other industrial applications that utilize gesture recognition.
Also, as indicated above, embodiments of the invention are not limited to use in recognition of hand gestures, but can be applied to other types of gestures as well. The term "gesture" as used herein is therefore intended to be broadly construed.
The operation of the GR system 1 10 of image processor 102 will now be described in greater detail with reference to the diagrams of FIGS. 2 through 6.
It is assumed in these embodiments that the input images 111 received in the image processor 102 from an image source comprise input depth images each referred to as an input frame. As indicated above, this source may comprise a depth imager such as an SL or ToF camera comprising a depth image sensor. Other types of image sensors including, for example, grayscale image sensors, color image sensors or infrared image sensors, may be used in other embodiments. A given image sensor typically provides image data in the form of one or more rectangular matrices of real or integer numbers corresponding to respective input image pixels. These matrices can contain per-pixel information such as depth values and corresponding amplitude or intensity values. Other per-pixel information such as color, phase and validity may additionally or alternatively be provided.
Referring now to FIG. 2, a process 200 performed by the static pose recognition module 114 in an illustrative embodiment is shown. The process is assumed to be applied to preprocessed image frames received from a preprocessing subsystem of the set of additional subsystems 116. The preprocessing subsystem performs noise reduction and background estimation and removal, using techniques such as those identified above. The image frames are received by the preprocessing system as raw image data from an image sensor of a depth imager such as a ToF camera or other type of ToF imager. The image sensor in this embodiment is assumed to comprise a variable frame rate image sensor, such as a ToF image sensor configured to operate at a variable frame rate. Accordingly, in the present embodiment, the static pose recognition module 114 can operate at a lower frame rate than other recognition modules 115, such as recognition modules configured to recognize cursor gestures and dynamic gestures. Other types of sources supporting variable or fixed frame rates can be used in other embodiments.
The process 200 includes the following steps:
1. Find hand region of interest (ROI);
2. Find hand skeleton;
3. Find hand main direction;
4. Find palm boundary;
5. Scan hand image;
6. Estimate hand features;
7. Normalize hand features; and
8. Recognition based on classification.
Each of the above-listed steps of the process 200 will be described in greater detail below. In other embodiments, certain steps may be combined with one another, or additional or alternative steps may be used.
Step 1. Find hand ROI
This step in the present embodiment more particularly involves defining an ROI mask for a hand in the input image. The ROI mask is implemented as a binary mask in the form of an image, also referred to herein as a "hand image," in which pixels within the ROI have a certain binary value, illustratively a logic 1 value, and pixels outside the ROI have the complementary binary value, illustratively a logic 0 value. The ROI corresponds to a hand within the input image, and is therefore also referred to herein as a hand ROI. An example of an ROI mask comprising a hand ROI can be seen in FIGS. 4 through 6 in the context of estimation of hand features. With reference to FIG. 5, the ROI mask is shown with 1-valued or "white" pixels identifying those pixels within the ROI, and 0-valued or "black" pixels identifying those pixels outside of the ROI. It can be seen that the hand ROI in the example of FIGS. 4, 5 and 6 is in the form of a particular type of static hand pose, namely, a "fingergun" static hand pose. This is one of multiple static hand poses that may be recognized using the process 200.
As noted above, the input image in which the hand ROI is identified in Step 1 is assumed to be supplied by a ToF imager. Such a ToF imager typically comprises a light emitting diode (LED) light source that illuminates an imaged scene. Distance is measured based on the time difference between the emission of light onto the scene from the LED source and the receipt at the image sensor of corresponding light reflected back from objects in the scene. Using the speed of light, one can calculate the distance to a given point on an imaged object for a particular pixel as a function of the time difference between emitting the incident light and receiving the reflected light. More particularly, distance d to the given point can be computed as follows:
d = c*T/2,
where T is the time difference between emitting the incident light and receiving the reflected light, c is the speed of light, and the constant factor 2 is due to the fact that the light passes through the distance twice, as incident light from the light source to the object and as reflected light from the object back to the image sensor. This distance is more generally referred to herein as a depth value.
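As an illustration (a sketch added here, not part of the patent text), the depth computation reduces to a one-line function:

```python
def tof_depth_from_time(T, c=299792458.0):
    """Depth value d = c*T/2, where T is the round-trip time difference
    in seconds and c is the speed of light in meters per second."""
    return c * T / 2.0

# A round-trip time difference of 10 nanoseconds corresponds to a depth
# of roughly 1.5 meters.
```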
The time difference between emitting and receiving light may be measured, for example, by using a periodic light signal, such as a sinusoidal light signal or a triangle wave light signal, and measuring the phase shift between the emitted periodic light signal and the reflected periodic signal received back at the image sensor.
Assuming the use of a sinusoidal light signal, the ToF imager can be configured, for example, to calculate a correlation function c(τ) between input reflected signal s(t) and output emitted signal g(t) shifted by a predefined value τ, in accordance with the following equation:
c(τ) = lim_{T'→∞} (1/T') ∫_{-T'/2}^{T'/2} s(t)g(t + τ) dt.

In such an embodiment, the ToF imager is more particularly configured to utilize multiple phase images, corresponding to respective predefined phase shifts τ_n given by nπ/2, where n = 0, ..., 3. Accordingly, in order to compute depth and amplitude values for a given image pixel, the ToF imager obtains four correlation values (A_0, ..., A_3), where
A_n = c(τ_n), n = 0, ..., 3,
and uses the following equations to calculate phase shift φ and amplitude a:
φ = arctg((A_3 - A_1)/(A_0 - A_2)) and a = (1/2)*sqrt((A_3 - A_1)^2 + (A_0 - A_2)^2).
The phase images in this embodiment comprise respective sets of A_0, A_1, A_2 and A_3 correlation values computed for a set of image pixels. Using the phase shift φ, a depth value d can be calculated for a given image pixel as d = c*φ/(2ω), where ω is the frequency of the emitted signal and c is the speed of light. These computations are repeated to generate depth and amplitude values for other image pixels. The resulting raw image data is transferred from the image sensor to internal memory of the image processor 102 for preprocessing in the manner previously described.
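The four-phase computation can be sketched as follows. The arctangent form of the phase equation is the standard one for four-phase ToF and is assumed here; function names are illustrative:

```python
import math

def phase_and_amplitude(A0, A1, A2, A3):
    """Phase shift and amplitude from the four correlation values
    (standard four-phase form, assumed here)."""
    phi = math.atan2(A3 - A1, A0 - A2)
    a = 0.5 * math.sqrt((A3 - A1) ** 2 + (A0 - A2) ** 2)
    return phi, a

def depth_from_phase(phi, omega, c=299792458.0):
    """Depth value d = c*phi/(2*omega) for emitted-signal frequency omega."""
    return c * phi / (2.0 * omega)
```

For an ideal sinusoidal correlation, the phase and amplitude recovered from four samples taken a quarter-period apart match the underlying signal exactly.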
The hand ROI can be identified in the preprocessed image using any of a variety of techniques. For example, it is possible to utilize the techniques disclosed in the above-cited Russian Patent Application No. 2013135506 to determine the hand ROI. Accordingly, the first step of the process 200 may be implemented in a preprocessing block of the GR system 110 rather than in the static pose recognition module 114.
As another example, the hand ROI can be determined using threshold logic applied to depth and amplitude values of the image. This can be more particularly implemented as follows:
1. If the amplitude values are known for respective pixels of the image, one can select only those pixels with amplitude values greater than some predefined threshold. This approach is applicable not only for images from ToF imagers, but also for images from other types of imagers, such as infrared imagers with active lighting. For both ToF imagers and infrared imagers with active lighting, the closer an object is to the imager, the higher the amplitude values of the corresponding image pixels, not taking into account reflecting materials. Accordingly, selecting only pixels with relatively high amplitude values allows one to preserve close objects from an imaged scene and to eliminate far objects from the imaged scene. It should be noted that for ToF imagers, pixels with lower amplitude values tend to have higher error in their corresponding depth values, and so removing pixels with low amplitude values additionally protects one from using incorrect depth information.
2. If the depth values are known for respective pixels of the image, one can select only those pixels with depth values falling between predefined minimum and maximum threshold depths Dmin and Dmax. These thresholds are set to appropriate distances between which the hand is expected to be located within the image.
3. Opening or closing morphological operations utilizing erosion and dilation operators can be applied to remove dots and holes as well as other spatial noise in the image.
One possible implementation of a threshold-based ROI determination technique using both amplitude and depth thresholds is as follows:
1. Set ROI_ij = 0 for each i and j.
2. For each depth pixel d_ij, set ROI_ij = 1 if d_ij > d_min and d_ij < d_max.
3. For each amplitude pixel a_ij, set ROI_ij = 1 if a_ij > a_min.
4. Coherently apply an opening morphological operation comprising erosion followed by dilation to both ROI and its complement to remove dots and holes comprising connected regions of ones and zeros having area less than a minimum threshold area A_min.
The output of the above-described ROI determination process is a binary ROI mask for the hand in the image. It can be in the form of an image having the same size as the input image, or a sub-image containing only those pixels that are part of the ROI. For further description below, it is assumed that the ROI mask is an image having the same size as the input image. As mentioned previously, the ROI mask is also referred to herein as a "hand image" and the ROI itself within the ROI mask is referred to as a "hand ROI." The output may include additional information such as an average of the depth values for the pixels in the ROI. This average of depth values for the ROI pixels is denoted elsewhere herein as meanZ.
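A minimal sketch of the threshold-based ROI determination, implementing listed steps 1 through 3 literally (each of the depth and amplitude tests independently sets a pixel to 1, as the listing reads) and omitting the morphological cleanup of step 4:

```python
def hand_roi_mask(depth, amplitude, d_min, d_max, a_min):
    """Binary ROI mask from per-pixel depth and amplitude thresholds.

    Implements steps 1-3 of the listing above; the opening morphological
    operation of step 4 is omitted here for brevity.
    """
    H, W = len(depth), len(depth[0])
    roi = [[0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            if d_min < depth[i][j] < d_max:   # step 2: depth in range
                roi[i][j] = 1
            if amplitude[i][j] > a_min:       # step 3: amplitude above threshold
                roi[i][j] = 1
    return roi
```

An AND combination of the two tests is an equally plausible reading of the prose; the union above follows the listed steps as written.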
Step 2. Find hand skeleton
Two exemplary techniques are described below for determining the hand skeleton in the hand image. These techniques are examples of what are more generally referred to herein as skeletonization operations, and other types of skeletonization operations can be used in other embodiments. The first exemplary technique below, denoted Technique A, is less computationally complex but also less precise than the second exemplary technique, denoted Technique B.
Technique A
For each row of the hand image containing at least one pixel of the ROI, store the middle point between the outermost left and right 1 values in the row as the skeleton value for that row. The hand skeleton comprises the set of stored points for the respective rows.
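Technique A reduces to a few lines; the sketch below (an illustration, with an integer midpoint assumed) stores one skeleton point per nonempty ROI row:

```python
def skeleton_technique_a(roi):
    """One skeleton point per ROI row: the middle point between the
    outermost left and right 1-valued pixels of that row."""
    skeleton = []
    for i, row in enumerate(roi):
        cols = [j for j, v in enumerate(row) if v == 1]
        if cols:
            skeleton.append((i, (cols[0] + cols[-1]) // 2))
    return skeleton
```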
Technique B
1. Apply a closing morphological operation, comprising dilation followed by erosion, to the hand image in order to maximally conglutinate the top four fingers, resulting in what is referred to herein as a "closed" hand image. Average typical distance between open fingers may be used as a pattern size for both dilation and erosion operations.
2. Calculate the distance transform of the closed hand image. More particularly, for each pixel in the ROI, calculate the minimal distance from the ROI boundaries using specified distance metrics, such as, for example, Euclidean or Manhattan distance metrics. Boundaries on a binary mask can be identified as pixels with value 1 having at least one neighbor pixel with value 0. The distance transform outside of the ROI is 0. The result of the distance transform calculation is a distance transform matrix DT = (dt_ij).
3. For each row i in which there is at least one ROI pixel, compute dtmax_i = max_j dt_ij, and add to the skeleton all points (i, j_1), ..., (i, j_ki) such that dt_ijk = dtmax_i for all k = 1, ..., k_i. There can be more than a single local maximum in each row, so k_i can be greater than 1, but usually k_i = 1.
For both Techniques A and B above, the resulting set of points is referred to herein as the hand skeleton SK = {(i_1, j_1), ..., (i_ks, j_ks)}, where SK is the set of skeleton points, and the cardinality of SK is ks.
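Technique B can be sketched with a breadth-first distance transform under the Manhattan metric (one of the metrics mentioned above); assigning boundary pixels distance 1 is an implementation convention of this sketch, not mandated by the text:

```python
from collections import deque

def distance_transform(roi):
    """Manhattan-distance transform to the ROI boundary (step 2 above).

    Boundary pixels (1-pixels with a 0-valued or out-of-image neighbor)
    seed a BFS; pixels outside the ROI keep distance 0.
    """
    H, W = len(roi), len(roi[0])
    dt = [[0] * W for _ in range(H)]
    q = deque()
    for i in range(H):
        for j in range(W):
            if roi[i][j] == 1:
                nbrs = ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if any(not (0 <= a < H and 0 <= b < W) or roi[a][b] == 0
                       for a, b in nbrs):
                    dt[i][j] = 1
                    q.append((i, j))
    while q:
        i, j = q.popleft()
        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= a < H and 0 <= b < W and roi[a][b] == 1 and dt[a][b] == 0:
                dt[a][b] = dt[i][j] + 1
                q.append((a, b))
    return dt

def skeleton_from_dt(roi, dt):
    """Step 3 above: per-row maxima of the distance transform."""
    sk = []
    for i, row in enumerate(dt):
        m = max(row)
        if m > 0:
            sk += [(i, j) for j, v in enumerate(row) if v == m]
    return sk
```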
Step 3. Find hand main direction
Exemplary techniques for finding the hand main direction described below include one of substeps 1a and 1b, each possibly combined with an optional substep 2.
1a. Approximate the set of points in the hand skeleton SK = {(i_1, j_1), ..., (i_ks, j_ks)} by a prediction line using Least Mean Squares (LMS). It should be noted that the main direction is usually vertical or near-vertical. Accordingly, the angle of variation of the prediction line from the vertical axis is usually smaller than 45 degrees. Using abscissa x and ordinate y to indicate respective row and column numbers, the main direction is given by a prediction line y = a*x + b that minimizes the following quadratic functional:

F_LMS(a, b) = Σ_{k=1}^{ks} (a*i_k + b - j_k)^2.
It is possible to reverse the above definition of abscissa and ordinate so as to indicate respective column and row numbers, although the resulting prediction quality for main directions close to vertical is typically not as good.
The minimization above can be obtained by solving a system of two linear equations given by ∂F_LMS/∂a = 0 and ∂F_LMS/∂b = 0. This system of equations can be solved, for example, by computing a = (ks*Mxy - Mx*My)/(ks*Mxx - Mx*Mx), b = (My - Mx*a)/ks, where Mxy is a mixed raw moment for x,y, Mxx is a second-order raw moment for x, Myy is a second-order raw moment for y, Mx is a first-order raw moment for x, and My is a first-order raw moment for y. Other techniques can be used for solving the system of equations.
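The closed-form solution can be sketched directly from the raw moments (an illustrative function; it assumes the skeleton spans at least two distinct rows so that the denominator is nonzero):

```python
def fit_main_direction_lms(skeleton):
    """Prediction line y = a*x + b from the raw-moment formulas above:
    a = (ks*Mxy - Mx*My) / (ks*Mxx - Mx*Mx), b = (My - Mx*a) / ks,
    with x = row index and y = column index of each skeleton point."""
    ks = len(skeleton)
    Mx = sum(i for i, j in skeleton)
    My = sum(j for i, j in skeleton)
    Mxy = sum(i * j for i, j in skeleton)
    Mxx = sum(i * i for i, j in skeleton)
    a = (ks * Mxy - Mx * My) / float(ks * Mxx - Mx * Mx)
    b = (My - Mx * a) / float(ks)
    return a, b
```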
1b. Approximate the set of points in the hand skeleton SK = {(i_1, j_1), ..., (i_ks, j_ks)} by a prediction line using Principal Component Analysis (PCA). In this embodiment, PCA determines the eigenvector corresponding to the largest eigenvalue of a covariance matrix C_SK for a centered set SKc = {(i_1 - i_c, j_1 - j_c), ..., (i_ks - i_c, j_ks - j_c)}, where i_c is a mean row value and j_c is a mean column value for the skeleton points. The line y = a*x + b is then determined as a = -2*mxy/(myy - mxx), b = j_c - i_c*a, where mxy is a mixed centered moment for x,y, mxx is a second-order centered moment for x, and myy is a second-order centered moment for y.
2. Compute the average deviation δ of distances between points of the skeleton and the prediction line found during substep 1a or 1b above, and remove all points of the skeleton with deviation greater than kδ, where k > 0 (e.g., k = 3). The removed points are also referred to herein as "outliers." In order to simplify the calculations in this substep, Cartesian distance can be substituted by the difference in y (i.e., column) coordinates of points. Substep 1a or 1b is then rerun to obtain a new estimate of the main direction. Substep 2 can be repeated until the set of points removed is empty.
After a given prediction line y = a*x + b is determined in substep 1a or 1b above, the angle between the vertical axis and the prediction line can be computed as angle = -arctg(a), where arctg denotes "arctangent." This angle is an example of what is more generally referred to herein as a "main direction" of a hand. Accordingly, hand main direction can be characterized by the prediction line itself, by an angle made by the prediction line relative to the vertical axis, or by other information based on the prediction line.
FIG. 3 illustrates an exemplary process of determining a main direction of a hand using the above-described substeps. The process starts with a skeleton 300 and includes steps 302 through 310. In step 302, the prediction line y = a*x + b is found using either the LMS or PCA method of respective substeps 1a or 1b. In step 304, the main direction of the hand is determined by computing the angle = -arctg(a). In step 306, any outliers are determined, as those points of the skeleton having a distance from the prediction line that is greater than kδ. If the number of outliers is determined to be greater than zero in step 308, the outliers are excluded from the skeleton in step 310, and otherwise the process ends by outputting the angle and the prediction line parameters a and b as indicated in step 312. From step 310, a feedback line 314 returns the process to step 302 to recompute the prediction line with the outliers excluded from the skeleton as described in substep 2 above. Each time the process is repeated, additional outliers are excluded via step 310 and the prediction line is recomputed in step 302 using the resulting reduced set of skeleton points. The feedback may be limited to a specified maximum number of passes through the process.
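The FIG. 3 loop can be sketched end-to-end as follows. This is an illustrative combination of substeps 1a and 2, using the column-coordinate difference as the simplified distance and a bounded pass count as suggested above:

```python
import math

def main_direction_with_outlier_removal(skeleton, k=3.0, max_iters=10):
    """Fit the prediction line, drop points deviating more than k*delta,
    and refit until no point is removed (or max_iters passes elapse).
    Returns (a, b, angle), with angle = -arctan(a)."""
    pts = list(skeleton)
    while max_iters > 0:
        max_iters -= 1
        ks = len(pts)
        Mx = sum(i for i, _ in pts)
        My = sum(j for _, j in pts)
        Mxy = sum(i * j for i, j in pts)
        Mxx = sum(i * i for i, _ in pts)
        # Assumes at least two distinct rows, so the denominator is nonzero.
        a = (ks * Mxy - Mx * My) / float(ks * Mxx - Mx * Mx)
        b = (My - Mx * a) / float(ks)
        dev = [abs(j - (a * i + b)) for i, j in pts]
        delta = sum(dev) / ks
        kept = [p for p, d in zip(pts, dev) if d <= k * delta]
        if len(kept) == len(pts):
            break
        pts = kept
    return a, b, -math.atan(a)
```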
Step 4. Find palm boundary
This step in the present embodiment more particularly involves defining the palm boundary and removing from the ROI any pixels below the palm boundary, leaving essentially only the palm and fingers in a modified hand image. Such a step advantageously eliminates, for example, any portions of the arm from the wrist to the elbow, as these portions can be highly variable due to the presence of items such as sleeves, wristwatches and bracelets, and in any event are typically not useful for static hand pose recognition.
Exemplary techniques that are suitable for use in implementing the palm boundary determination in the present embodiment are described in Russian Patent Application No. 2013134325, filed July 22, 2013 and entitled "Gesture Recognition Method and Apparatus Based on Analysis of Multiple Candidate Boundaries," which is commonly assigned herewith and incorporated by reference herein.
Alternative techniques can be used. For example, the palm boundary may be determined by taking into account that the typical length of the human hand is about 20-25 centimeters (cm), and removing from the ROI all pixels located farther than a 25 cm threshold distance from the uppermost fingertip along the previously-determined main direction of the hand. The uppermost fingertip can be identified as the uppermost point of the hand skeleton or as the uppermost 1 value in the binary ROI mask. The 25 cm threshold can be converted to a particular number of image pixels by using an average depth value determined for the pixels in the ROI as mentioned in conjunction with the description of Step 1 above.

Step 5. Scan hand image
This step in the present embodiment more particularly involves scanning the modified hand image resulting from Step 4. The scanning is performed line-by-line over lines that are perpendicular to the main direction line previously determined in Step 3. In conjunction with this step, the ROI mask is effectively modified so as to correspond to a vertically-oriented hand. This can be achieved by rotating the existing ROI mask by an angle α, where α is the angle between the main direction and the vertical axis as determined in Step 3, but such rotation is not computationally efficient for binary masks. Instead, perpendiculars to the main direction line are determined, and the hand image is scanned line-by-line along such perpendiculars. The latter approach may be considered a type of "virtual" rotation of the ROI mask, as opposed to a "real" rotation of the ROI mask by the angle α.
Two exemplary techniques are described below for determining a perpendicular to the main direction line, although other techniques can be used in other embodiments. The first exemplary technique below, denoted Technique A, is less computationally complex but also less precise than the second exemplary technique, denoted Technique B.
Technique A
Let y = A*x + B be a perpendicular to the main direction, assuming that the main direction cannot be a horizontal line. Let W be the width of the hand ROI, given by the difference between the column numbers of the leftmost and the rightmost ROI pixels. Then for each value of x = 1, ..., W let y[x] = round(A*x + B), where round(x) denotes the closest integer value to x. The array of y[x], x = 1, ..., W forms a discrete perpendicular to the main direction. Movement of the discrete perpendicular from the top image row to the bottom image row with a step size equal to 1 pixel between adjacent instances of the perpendicular will cover the entire image. However, the resulting ROI mask will have non-square pixels, such that correction coefficients (1/sin(α) and 1/cos(α)) are used to normalize the hand features in Step 7.
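The discrete perpendicular of Technique A can be sketched as a one-liner (floor(x + 0.5) is used here so that "closest integer" behaves deterministically at halfway values; this rounding convention is an assumption of the sketch):

```python
import math

def discrete_perpendicular(A, B, W):
    """y[x] = round(A*x + B) for x = 1..W: a discrete line perpendicular
    to the main direction, covering the ROI width W."""
    return [int(math.floor(A * x + B + 0.5)) for x in range(1, W + 1)]
```

Shifting this array downward one row at a time, as described above, covers the entire image.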
Technique B
This technique uses the angle a to calculate the perpendicular to the main direction, but scans using precise steps that are equal to 1 pixel both for movement along a given perpendicular to the main direction line and for movement along the main direction line from perpendicular to perpendicular. The coordinates can be rounded to nearest integer values or various types of interpolation (e.g., bilinear, bi-cubic, etc.) can be applied.
The following pseudocode example illustrates the technique in more detail. In this pseudocode example, the notation (jtip, itip) identifies the coordinates of the uppermost pixel in the hand ROI.

#define W 165 // Image width
#define H 120 // Image height

find_hand_direction(skeleton, a, b); // skeleton - input; a, b - output

float c = itip + a*jtip; // perpendicular line crossing point (jtip, itip): y = -a*x + c
float alpha = arctg(a), sina = sin(alpha), cosa = cos(alpha);
float xx[W], yy[W];
for (int j = 0; j < W; j++) { xx[j] = jtip + (j - W/2)*cosa; yy[j] = -a * xx[j] + c; }

cv::Mat mask; mask.create(H, W, CV_32F); mask = 0.0f;

#define INSIDE(x, y) (y >= 1 && y <= H-2 && x >= 1 && x <= W-2)

int jleft = W-1, jright = 0;
int itop = H-1, ibottom = 0;
for (int i = 0; i < H; i++)
{
    for (int j = 0; j < W; j++)
    {
        if (INSIDE(xx[j], yy[j]) &&
            roi.at<float>(int(yy[j]), int(xx[j])) >= 0.5f)
        {
            jleft = min(jleft, j); jright = max(jright, j);
            itop = min(itop, i); ibottom = max(ibottom, i);
            mask.at<float>(i, j) = 1.0f;
        }
        xx[j] += sina; yy[j] += cosa;
    }
    if (yy[0] > H-1 && yy[W-1] > H-1) break;
}
Application of either of the above techniques results in an ROI mask that is effectively modified so as to correspond to a vertically-oriented hand. This modified ROI mask is also referred to herein as a vertically-oriented ROI mask. As mentioned previously, it is possible to obtain the modified ROI mask by performing a real rotation of the hand ROI by the angle α, although such a rotation would typically be less efficient than the exemplary virtual rotation techniques described above.
It should also be noted that at least a portion of the hand feature estimations described in Step 6 below may be performed in conjunction with the above-described scanning process. If in a given embodiment it is possible to calculate all of the desired hand features using a single pass of image scanning, one need not store the vertically-oriented ROI mask itself.
Step 6. Estimate hand features
This step generally involves estimating hand features using the vertically-oriented ROI mask described above. The estimated hand features, after any needed normalization in Step 7, are provided as input to classifiers configured to recognize particular static poses in Step 8. As mentioned previously, the estimation of the hand features can be performed as part of the image scanning of Step 5, in which case both Step 5 and Step 6 can be performed as a single combined step of the process 200. At least portions of Step 7 may also be implemented in such a combined step.
The use of a vertically-oriented ROI mask to estimate the hand features advantageously reduces the dimensionality of the operation and therefore improves its performance.
The hand features determined using the vertically-oriented ROI mask in Step 6 include at least a subset of the following features:
1. Square root of the hand area, where the hand area is defined as the number of ROI pixels with value 1.
2. Perimeter of the hand, given by the number of ROI pixels with value 1 which have at least one neighbor pixel with value 0.
3. Width of the hand, given by the difference between the column numbers of the leftmost and the rightmost ROI pixels.
4. Height of the hand, given by the difference between the row numbers of the uppermost and the lowermost ROI pixels.
5. Second-order centered moments for x and y coordinates of the ROI pixels.
6. Square root of the top finger area, where the top finger area is defined as the number of ROI pixels that are not farther than h_top cm from the uppermost ROI pixel. An exemplary value for h_top is h_top = 2, although other values could be used. The top finger area used in this feature is illustrated in FIG. 4 as the darkened portion of the tip of the pointing finger. The line 400 indicates the main direction line of the hand in the ROI mask.
7. Square root of the side finger area, where the side finger area is defined as the minimum of the number of ROI pixels that are not farther than h_left cm from the leftmost ROI pixel and the number of ROI pixels that are not farther than h_right cm from the rightmost ROI pixel. Exemplary values for h_left and h_right are h_left = 2 and h_right = 2, although again other values could be used. The side finger area computation is performed by minimization element 402 in FIG. 4 using the darkened areas shown at the left and right sides of the ROI mask.
8. Degree of non-convexity, given by the square root of the number of pixels with value 0 that are bordered by at least two ROI pixels with value 1 as determined while scanning the hand image along perpendiculars to the main direction as per Step 5. This is illustrated in FIG. 5, which shows a set of mask scanning lines 500 corresponding to respective perpendiculars of the main direction line 400 of the ROI mask. The identified 0-valued pixels are in two regions of the image, one in the trough between the thumb and forefinger and the other between a pair of knuckles of the hand, and the numbers of pixels in these two regions are combined by a summing element 502. The output of the summing element 502 is subject to a square root operation not specifically illustrated in the figure in order to generate the feature. The degree of non-convexity is equal to zero for all convex ROIs.
9. Degree of "egg-likeness." Assume that the height of the hand is H, and that w1, w2 and w3 are the widths of the hand at respective heights h1 = 1/4*H, h2 = 1/2*H and h3 = 3/4*H. Using the three points (h1, w1), (h2, w2) and (h3, w3) in two-dimensional space, find a parabola of the form w(h) = a1*h^2 + a2*h + a3 that goes through all three points. This feature is illustrated in FIG. 6, based on a mask profile 600 used to generate a parabola 602. The degree of "egg-likeness" is illustratively given by the curvature of the parabola as expressed by the first coefficient a1.
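Because the three sample heights are equally spaced, the parabola's leading coefficient a1 has a closed form; the sketch below (an illustration, not the patent's code) returns it directly:

```python
def egg_likeness(H, w1, w2, w3):
    """Leading coefficient a1 of the parabola w(h) = a1*h^2 + a2*h + a3
    through (H/4, w1), (H/2, w2) and (3H/4, w3).

    For equally spaced heights with step s = H/4, the second difference
    gives w1 - 2*w2 + w3 = 2*a1*s^2, hence a1 = (w1 - 2*w2 + w3)/(2*s^2).
    """
    s = H / 4.0
    return (w1 - 2.0 * w2 + w3) / (2.0 * s * s)
```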
The above-described hand features can all be calculated at relatively low complexity using one or at most two scanning passes through the ROI mask.
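Features 1 through 4 can indeed be estimated in a single pass over the vertically-oriented ROI mask; the following sketch illustrates this (a 4-neighborhood is assumed for the perimeter test, and the ROI is assumed nonempty):

```python
import math

def basic_hand_features(roi):
    """Features 1-4: sqrt(hand area), perimeter, width and height of the
    ROI, computed in one scan of the binary mask."""
    H, W = len(roi), len(roi[0])
    area = 0
    perimeter = 0
    rows, cols = [], []
    for i in range(H):
        for j in range(W):
            if roi[i][j] == 1:
                area += 1
                rows.append(i)
                cols.append(j)
                # Perimeter pixel: at least one 0-valued (or out-of-image) neighbor.
                for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if not (0 <= a < H and 0 <= b < W) or roi[a][b] == 0:
                        perimeter += 1
                        break
    width = max(cols) - min(cols)
    height = max(rows) - min(rows)
    return math.sqrt(area), perimeter, width, height
```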
It should be noted that the above-described hand features are exemplary only, and additional or alternative hand features may be utilized to facilitate static pose recognition in other embodiments. For example, various functions of one or more of the above-described hand features or other related hand features may be used as additional or alternative hand features. Thus, functions other than square root may be used in conjunction with hand area, top finger area, side finger area or other features. Also, techniques other than those described above may be used to compute the features.
The particular number of features utilized in a given embodiment will typically depend on factors such as the number of different hand pose classes to be recognized, the shape of an average hand inside each class, and the recognition quality requirements. Techniques such as Monte-Carlo simulations or genetic search algorithms can be utilized to determine an optimal subset of the features for given levels of computational complexity and recognition quality.
As one example, a pointing gesture detector having only three distinct classes, corresponding to pointing forefinger, pointing forefinger with open thumb ("fingergun"), and all other static hand poses, respectively, can achieve an approximately 0.995 recognition rate using the subset of features 1, 2, 3, 6, 7 and 8.
Step 7. Normalize hand features
The previously-described steps result in an arrangement in which hand features are invariant to certain image transformations, such as rotation and movement. However, the hand features may also be made invariant to scaling by applying feature normalization as will now be described. It should again be noted that if Technique A is utilized for hand image scanning in Step 5, correction coefficients should be applied to take into account that the pixels of the scanned ROI mask are no longer square, although application of such correction coefficients does not significantly increase computational complexity.
The additional feature normalization can then be implemented as follows. If the average depth value for the ROI pixels is not available, linear features such as width, height and perimeter are normalized by dividing each such linear feature by the square root of the hand area, while second order features such as moments are normalized by dividing each such second order feature by the hand area itself. If the average depth value for the ROI pixels is available, linear features are instead multiplied by the average depth value and second order features are multiplied by the square of the average depth value.
The latter normalization based on the average depth value can be better understood by considering the correspondence between the size of a portion of an imaged object as captured in a given pixel and the size of that portion of the imaged object in real units (e.g., meters). This correspondence can be computed as pixel_size_in_meters = meanZ * tan(horzFOV/2)/(W/2), where meanZ denotes the average depth value as mentioned in conjunction with Step 1 above, W denotes hand width, and horzFOV denotes horizontal angle of field of view (e.g., 90 degrees). The normalized feature is then given by normalized_feature_in_meters = feature_in_pixels * pixel_size_in_meters. It is therefore apparent that linear features should be multiplied by a coefficient proportional to the average depth value, and that features of higher order should be multiplied by a coefficient proportional to the average depth value to that order, as in the normalization previously described.
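The normalization arithmetic can be sketched as follows. W is interpreted here as a width in pixels matching the field-of-view geometry (an assumption; the text calls it hand width), and the function names are illustrative:

```python
import math

def pixel_size_in_meters(meanZ, horz_fov_deg, W):
    """Per-pixel size in meters at average depth meanZ, for a horizontal
    field of view of horz_fov_deg degrees spanning W pixels."""
    return meanZ * math.tan(math.radians(horz_fov_deg) / 2.0) / (W / 2.0)

def normalize_linear_feature(feature_in_pixels, meanZ, horz_fov_deg, W):
    """Linear feature in pixels scaled to meters; second-order features
    would instead be scaled by the square of the per-pixel size."""
    return feature_in_pixels * pixel_size_in_meters(meanZ, horz_fov_deg, W)
```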
Step 8. Recognition based on classification
In this step, classification techniques are applied to recognize static hand poses based on the normalized hand features from Step 7. Examples of static pose classes that may be utilized in a given embodiment include finger, palm with fingers, palm without fingers, hand edge, pinch, fist, fingergun and head. Each static pose class utilizes a corresponding classifier configured in accordance with a classification technique such as, for example, Gaussian Mixture Models (GMMs), Nearest Neighbor, Decision Trees, and Neural Networks. Additional details regarding the use of classifiers based on GMMs in the recognition of static hand poses can be found in the above-cited Russian Patent Application No. 2013134325.
The particular types and arrangements of processing blocks shown in the embodiments of FIGS. 2 and 3 are exemplary only, and additional or alternative blocks can be used in other embodiments. For example, blocks illustratively shown as being executed serially in the figures can be performed at least in part in parallel with one or more other blocks or in other pipelined configurations in other embodiments.

The illustrative embodiments provide significantly improved gesture recognition performance relative to conventional arrangements. For example, these embodiments provide computationally-efficient static pose recognition using estimated hand features that are substantially invariant to hand orientation within an image and in some cases also substantially invariant to scale and movement of the hand within an image. This avoids the need for complex hand image normalizations that would otherwise be required to deal with variations in hand orientation, scale and movement. Accordingly, the GR system performance is accelerated while ensuring high precision in the recognition process. The disclosed techniques can be applied to a wide range of different GR systems, using depth, grayscale, color, infrared and other types of imagers which support a variable frame rate, as well as imagers which do not support a variable frame rate.
Different portions of the GR system 110 can be implemented in software, hardware, firmware or various combinations thereof. For example, software utilizing hardware accelerators may be used for some processing blocks while other blocks are implemented using combinations of hardware and firmware.
At least portions of the GR-based output 112 of GR system 110 may be further processed in the image processor 102, or supplied to another processing device 106 or image destination, as mentioned previously.
It should again be emphasized that the embodiments of the invention as described herein are intended to be illustrative only. For example, other embodiments of the invention can be implemented utilizing a wide variety of different types and arrangements of image processing circuitry, modules, processing blocks and associated operations than those utilized in the particular embodiments described herein. In addition, the particular assumptions made herein in the context of describing certain embodiments need not apply in other embodiments. These and numerous other alternative embodiments within the scope of the following claims will be readily apparent to those skilled in the art.

Claims

What is claimed is:
1. A method comprising steps of:
identifying a hand region of interest in at least one image;
performing a skeletonization operation on the hand region of interest;
determining a main direction of the hand region of interest utilizing a result of the skeletonization operation;
performing a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation; and
recognizing a static pose of the hand region of interest based on the estimated hand features;
wherein the steps are implemented in an image processor comprising a processor coupled to a memory.
2. The method of claim 1 wherein the steps are implemented in a static pose recognition module of a gesture recognition system of the image processor.
3. The method of claim 2 wherein the static pose recognition module operates at a lower frame rate than at least one other recognition module of the gesture recognition system.
4. The method of claim 1 wherein identifying a hand region of interest comprises generating a hand image comprising a binary region of interest mask in which pixels within the hand region of interest all have a first binary value and pixels outside the hand region of interest all have a second binary value complementary to the first binary value.
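As a minimal illustration of the binary region-of-interest mask of claim 4, the sketch below segments a toy depth image by simple thresholding. The `near`/`far` bounds and the thresholding rule itself are illustrative assumptions; the claim does not prescribe how the hand pixels are located.

```python
import numpy as np

def hand_roi_mask(depth, near=300.0, far=600.0):
    """Binary ROI mask: pixels inside the hand region take the first binary
    value (1); all other pixels take the complementary value (0)."""
    return ((depth >= near) & (depth <= far)).astype(np.uint8)

# Toy depth map (e.g. millimetres); values between near and far are "hand".
depth = np.array([[100., 400., 450.],
                  [700., 420., 410.],
                  [100., 100., 500.]])
mask = hand_roi_mask(depth)
```

Any downstream step then only needs to distinguish the two complementary binary values, regardless of how the segmentation was produced.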
5. The method of claim 1 wherein the result of the skeletonization operation comprises a hand skeleton comprising a set of skeleton points.
6. The method of claim 5 wherein performing a skeletonization operation on the hand region of interest comprises, for each of a plurality of rows of the hand region of interest, selecting a middle point between outermost left and right pixels of the hand region of interest as a skeleton point for that row.
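The row-wise skeletonization of claim 6 can be sketched directly: for each row that contains ROI pixels, the skeleton point is the midpoint between the outermost left and right ROI pixels.

```python
import numpy as np

def row_skeleton(mask):
    """Per row, select the middle point between the outermost left and
    right ROI pixels as that row's skeleton point (claim 6)."""
    points = []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])   # columns of ROI pixels in this row
        if xs.size:
            points.append((y, int((xs[0] + xs[-1]) // 2)))
    return points

mask = np.array([[0, 1, 1, 1, 0],
                 [0, 0, 1, 1, 1],
                 [0, 0, 0, 0, 0]], dtype=np.uint8)
skeleton = row_skeleton(mask)
```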
7. The method of claim 5 wherein performing a skeletonization operation on the hand region of interest comprises:
applying a closing morphological operation to a hand image containing the hand region of interest to generate a closed hand image;
computing a distance transform for the closed hand image; and
selecting the skeleton points based on the distance transform.
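The closing-plus-distance-transform variant of claim 7 can be sketched with brute-force morphology and a brute-force Euclidean distance transform, which is adequate for toy inputs. The per-row ridge-maximum rule used to pick skeleton points is our assumption, since the claim leaves the selection criterion open.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation (brute force over the 8-neighbourhood)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(mask):
    """3x3 binary erosion (zero padding, so border pixels erode away)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def distance_transform(mask):
    """Euclidean distance of each ROI pixel to the nearest background pixel
    (O(n^2) brute force -- fine for a toy example)."""
    dist = np.zeros(mask.shape)
    bg = np.argwhere(mask == 0)
    for y, x in np.argwhere(mask == 1):
        dist[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return dist

def skeleton_points(mask):
    closed = erode(dilate(mask))   # morphological closing fills small holes
    dist = distance_transform(closed)
    # Illustrative selection rule: per row, keep the pixel deepest inside
    # the hand (the ridge of the distance transform).
    return [(y, int(dist[y].argmax()))
            for y in range(mask.shape[0]) if dist[y].max() > 0]

mask = np.zeros((7, 7), dtype=np.uint8)
mask[1:6, 1:6] = 1                  # a solid square "hand"
pts = skeleton_points(mask)
```

In practice one would use optimized library routines for the closing and the distance transform; the point here is only the order of the three steps recited in the claim.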
8. The method of claim 1 wherein determining a main direction of the hand region of interest comprises:
determining a prediction line based on a set of skeleton points;
obtaining the main direction from the prediction line;
identifying skeleton points located more than a threshold distance from the prediction line;
eliminating the identified skeleton points from the set of the skeleton points to generate an updated set of skeleton points; and
repeating the determining, obtaining, identifying and eliminating for one or more additional iterations until a designated minimum number of identified skeleton points is reached or a designated maximum number of iterations is reached.
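The iterative main-direction estimation of claim 8 amounts to robust line fitting with outlier rejection. The sketch below uses an ordinary least-squares prediction line; the fitting method and the `threshold`, `min_outliers` and `max_iter` values are illustrative assumptions, not prescribed by the claim.

```python
import numpy as np

def main_direction(skeleton, threshold=1.0, min_outliers=1, max_iter=5):
    """Fit a prediction line to the skeleton points, drop points farther
    than `threshold` from it, refit, and repeat -- stopping when fewer than
    `min_outliers` points are identified or `max_iter` is reached.
    Returns the direction angle in radians."""
    pts = np.asarray(skeleton, dtype=float)
    a = 0.0
    for _ in range(max_iter):
        y, x = pts[:, 0], pts[:, 1]
        a, b = np.polyfit(y, x, 1)          # prediction line x = a*y + b
        d = np.abs(x - a * y - b) / np.hypot(a, 1.0)  # distance to the line
        outliers = d > threshold
        if outliers.sum() < min_outliers or len(pts) - outliers.sum() < 2:
            break
        pts = pts[~outliers]                # eliminate identified points
    return np.arctan2(1.0, a)               # angle of direction vector (1, a)

# Skeleton points on a vertical line, plus one stray point.
skeleton = [(0, 2), (1, 2), (2, 2), (3, 2), (4, 10)]
angle = main_direction(skeleton)
```

Fitting x as a function of y (rather than the reverse) avoids degenerate slopes for near-vertical hands, which is the common case for an upright hand pose.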
9. The method of claim 1 further comprising:
identifying a palm boundary of the hand region of interest; and
modifying the hand region of interest to exclude from the hand region of interest any pixels below the identified palm boundary.
10. The method of claim 1 wherein performing a scanning operation utilizing the determined main direction comprises:
determining a plurality of lines perpendicular to a line of the main direction; and
scanning the hand region of interest along the perpendicular lines.
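The scanning of claim 10 can be sketched by projecting each ROI pixel onto the main-direction axis and its perpendicular: pixels sharing (roughly) the same main-axis coordinate lie on one perpendicular scan line, and their perpendicular spread gives the hand width on that line. The rotation convention (angle measured from the row axis) and the unit binning step are our assumptions.

```python
import numpy as np

def scan_widths(mask, angle, step=1.0):
    """Width of the ROI along scan lines perpendicular to the main
    direction; `angle` is measured from the row (y) axis, so angle = 0
    means a vertical hand scanned row by row."""
    ys, xs = np.nonzero(mask)
    u = np.cos(angle) * ys + np.sin(angle) * xs    # along the main direction
    v = -np.sin(angle) * ys + np.cos(angle) * xs   # across it
    widths = []
    for level in np.arange(u.min(), u.max() + step, step):
        sel = np.abs(u - level) < step / 2         # one perpendicular line
        if sel.any():
            widths.append(float(v[sel].max() - v[sel].min() + 1))
    return widths

mask = np.array([[1, 1, 1],
                 [1, 1, 0],
                 [0, 0, 0]], dtype=np.uint8)
widths = scan_widths(mask, 0.0)
```

Because the scan is aligned to the estimated main direction, the resulting width profile is substantially the same however the hand is rotated in the frame.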
11. The method of claim 1 wherein the hand features include one or more of the following hand features or functions thereof:
an area of the hand region of interest;
a perimeter of the hand region of interest;
a width of the hand region of interest; and
a height of the hand region of interest.
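The claim 11 features can be computed in a few lines. The perimeter convention used below (ROI pixels with at least one 4-connected background neighbour) is one of several reasonable choices, since the claim does not fix one.

```python
import numpy as np

def basic_features(mask):
    """Area, perimeter, width and height of the ROI mask (claim 11)."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    p = np.pad(mask, 1)
    # A pixel is interior if all four 4-connected neighbours are ROI pixels.
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int(((mask == 1) & (interior == 0)).sum())
    width = int(xs.max() - xs.min() + 1)
    height = int(ys.max() - ys.min() + 1)
    return area, perimeter, width, height

mask = np.ones((3, 3), dtype=np.uint8)   # 3x3 solid block
features = basic_features(mask)
```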
12. The method of claim 1 wherein the hand features include second-order centered moments or functions thereof for coordinates of pixels of the hand region of interest.
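A sketch of the second-order centered moments of claim 12. Centering on the centroid removes dependence on where the hand sits in the frame, which is why these features tolerate hand movement.

```python
import numpy as np

def second_order_moments(mask):
    """Second-order centered moments (mu20, mu02, mu11) of the ROI pixel
    coordinates (claim 12), normalized by pixel count."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()        # centroid of the ROI
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return mu20, mu02, mu11

# A horizontal 1x3 strip: all variance lies along x.
mask = np.array([[1, 1, 1]], dtype=np.uint8)
mu20, mu02, mu11 = second_order_moments(mask)
```

Shifting the same strip elsewhere in a larger frame leaves all three moments unchanged, illustrating the movement invariance.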
13. The method of claim 1 wherein the hand features include one or more of the following hand features or functions thereof:
a top finger area;
a side finger area; and
a degree of non-convexity.
14. The method of claim 1 wherein the hand features include one or more coefficients of a parabola fit to points given by widths of the hand region of interest at respective specified heights of the hand region of interest.
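The parabola fit of claim 14 can be sketched as follows; ordinary least squares via `np.polyfit` is an illustrative choice, as the claim does not specify the fitting method.

```python
import numpy as np

def width_parabola(mask):
    """Fit w(h) = c2*h**2 + c1*h + c0 to (height, width) samples, where
    the width at height h is the ROI extent in row h (claim 14).
    Returns the coefficients (c2, c1, c0)."""
    heights, widths = [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            heights.append(y)
            widths.append(int(xs[-1] - xs[0] + 1))
    return np.polyfit(heights, widths, 2)

# Rows with widths 1, 2, 5 -- exactly w(h) = h**2 + 1.
mask = np.array([[1, 0, 0, 0, 0],
                 [1, 1, 0, 0, 0],
                 [1, 1, 1, 1, 1]], dtype=np.uint8)
coeffs = width_parabola(mask)
```

The coefficients summarize how the silhouette widens toward the palm, in a form that does not depend on the hand's absolute position.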
15. A non-transitory computer-readable storage medium having computer program code embodied therein, wherein the computer program code when executed in the image processor causes the image processor to perform the method of claim 1.
16. An apparatus comprising:
an image processor comprising image processing circuitry and an associated memory;
wherein the image processor is configured to implement a gesture recognition system utilizing the image processing circuitry and the memory, the gesture recognition system comprising a static pose recognition module; and
wherein the static pose recognition module is configured to identify a hand region of interest in at least one image, to perform a skeletonization operation on the hand region of interest, to determine a main direction of the hand region of interest utilizing a result of the skeletonization operation, to perform a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation, and to recognize a static pose of the hand region of interest based on the estimated hand features.
17. The apparatus of claim 16 wherein the static pose recognition module is configured to determine a main direction of the hand region of interest by determining a prediction line based on a set of skeleton points, obtaining the main direction from the prediction line, identifying skeleton points located more than a threshold distance from the prediction line, eliminating the identified skeleton points from the set of the skeleton points to generate an updated set of skeleton points, and repeating the determining, obtaining, identifying and eliminating for one or more additional iterations until a designated minimum number of identified skeleton points is reached or a designated maximum number of iterations is reached.
18. The apparatus of claim 16 wherein the static pose recognition module is configured to perform a scanning operation utilizing the determined main direction by determining a plurality of lines perpendicular to a line of the main direction and scanning the hand region of interest along the perpendicular lines.
19. An integrated circuit comprising the apparatus of claim 16.
20. An image processing system comprising the apparatus of claim 16.
PCT/US2014/036339 2013-10-30 2014-05-01 Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition WO2015065520A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/358,320 US20150161437A1 (en) 2013-10-30 2014-05-01 Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2013148582 2013-10-30
RU2013148582/08A RU2013148582A (en) 2013-10-30 2013-10-30 IMAGE PROCESSING PROCESSOR CONTAINING A GESTURE RECOGNITION SYSTEM WITH A COMPUTER-EFFECTIVE FIXED HAND POSITION RECOGNITION

Publications (1)

Publication Number Publication Date
WO2015065520A1 true WO2015065520A1 (en) 2015-05-07

Family

ID=53004900

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/036339 WO2015065520A1 (en) 2013-10-30 2014-05-01 Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition

Country Status (3)

Country Link
US (1) US20150161437A1 (en)
RU (1) RU2013148582A (en)
WO (1) WO2015065520A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2924543B1 (en) * 2014-03-24 2019-12-04 Tata Consultancy Services Limited Action based activity determination system and method
US10649536B2 (en) * 2015-11-24 2020-05-12 Intel Corporation Determination of hand dimensions for hand and gesture recognition with a computing interface
US10318008B2 (en) 2015-12-15 2019-06-11 Purdue Research Foundation Method and system for hand pose detection
US10636156B2 (en) 2016-09-12 2020-04-28 Deepixel Inc. Apparatus and method for analyzing three-dimensional information of image based on single camera and computer-readable medium storing program for analyzing three-dimensional information of image
US10521947B2 (en) * 2017-09-29 2019-12-31 Sony Interactive Entertainment Inc. Rendering of virtual hand pose based on detected hand input
EP3677997B1 (en) * 2019-01-03 2021-10-13 HTC Corporation Electronic system and controller
CN110569817B (en) * 2019-09-12 2021-11-02 北京邮电大学 System and method for realizing gesture recognition based on vision

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105772A1 (en) * 1998-08-10 2005-05-19 Nestor Voronka Optical body tracker
US20060010400A1 (en) * 2004-06-28 2006-01-12 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
US20120309532A1 (en) * 2011-06-06 2012-12-06 Microsoft Corporation System for finger recognition and tracking

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8872899B2 (en) * 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US20120314031A1 (en) * 2011-06-07 2012-12-13 Microsoft Corporation Invariant features for computer vision
US9275277B2 (en) * 2013-02-22 2016-03-01 Kaiser Foundation Hospitals Using a combination of 2D and 3D image data to determine hand features information
US9020194B2 (en) * 2013-06-14 2015-04-28 Qualcomm Incorporated Systems and methods for performing a device action based on a detected gesture
US9436872B2 (en) * 2014-02-24 2016-09-06 Hong Kong Applied Science and Technology Research Institute Company Limited System and method for detecting and tracking multiple parts of an object
US10310675B2 (en) * 2014-08-25 2019-06-04 Canon Kabushiki Kaisha User interface apparatus and control method
US20160078289A1 (en) * 2014-09-16 2016-03-17 Foundation for Research and Technology - Hellas (FORTH) (acting through its Institute of Computer Gesture Recognition Apparatuses, Methods and Systems for Human-Machine Interaction


Also Published As

Publication number Publication date
RU2013148582A (en) 2015-05-10
US20150161437A1 (en) 2015-06-11

Similar Documents

Publication Publication Date Title
US20150253864A1 (en) Image Processor Comprising Gesture Recognition System with Finger Detection and Tracking Functionality
US20150253863A1 (en) Image Processor Comprising Gesture Recognition System with Static Hand Pose Recognition Based on First and Second Sets of Features
US20150161437A1 (en) Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition
US20150278589A1 (en) Image Processor with Static Hand Pose Recognition Utilizing Contour Triangulation and Flattening
US9384556B2 (en) Image processor configured for efficient estimation and elimination of foreground information in images
CN110992356B (en) Target object detection method and device and computer equipment
WO2020108311A1 (en) 3d detection method and apparatus for target object, and medium and device
US9852495B2 (en) Morphological and geometric edge filters for edge enhancement in depth images
US10872227B2 (en) Automatic object recognition method and system thereof, shopping device and storage medium
US20160026857A1 (en) Image processor comprising gesture recognition system with static hand pose recognition based on dynamic warping
US20150206318A1 (en) Method and apparatus for image enhancement and edge verificaton using at least one additional image
US10140513B2 (en) Reference image slicing
US20150286859A1 (en) Image Processor Comprising Gesture Recognition System with Object Tracking Based on Calculated Features of Contours for Two or More Objects
WO2010135617A1 (en) Gesture recognition systems and related methods
US20150023607A1 (en) Gesture recognition method and apparatus based on analysis of multiple candidate boundaries
US20150269425A1 (en) Dynamic hand gesture recognition with selective enabling based on detected hand velocity
US20210142039A1 (en) Apparatus and method for identifying an articulatable part of a physical object using multiple 3d point clouds
US20150310264A1 (en) Dynamic Gesture Recognition Using Features Extracted from Multiple Intervals
US20150262362A1 (en) Image Processor Comprising Gesture Recognition System with Hand Pose Matching Based on Contour Features
WO2014133584A1 (en) Image processor with multi-channel interface between preprocessing layer and one or more higher layers
US20150139487A1 (en) Image processor with static pose recognition module utilizing segmented region of interest
US20160247286A1 (en) Depth image generation utilizing depth information reconstructed from an amplitude image
CN117581275A (en) Eye gaze classification
US9323995B2 (en) Image processor with evaluation layer implementing software and hardware algorithms of different precision
US20150278582A1 (en) Image Processor Comprising Face Recognition System with Face Recognition Based on Two-Dimensional Grid Transform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14858918

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14858918

Country of ref document: EP

Kind code of ref document: A1