WO2015060896A1 - Finite state machine cursor and dynamic gesture detector recognition - Google Patents


Info

Publication number
WO2015060896A1
Authority
WO
WIPO (PCT)
Prior art keywords
cursor
detector
dynamic gesture
current frame
state machine
Application number
PCT/US2014/035838
Other languages
French (fr)
Inventor
Pavel A. ALISEYCHIK
Aleksey A. LETUNOVSKIY
Ivan L. MAZURENKO
Alexander A. PETYUSHKO
Denis V. ZAYTSEV
Original Assignee
LSI Corporation
Application filed by LSI Corporation
Priority to US14/358,358 (published as US20150220153A1)
Publication of WO2015060896A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 - Detection arrangements using opto-electronic means
    • G06F 3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F 3/0346 - Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/0354 - Pointing devices with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F 3/038 - Control and interface arrangements for pointing devices, e.g. drivers or device-embedded control circuitry
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/042 - Digitisers using opto-electronic means
    • G06F 3/0425 - Digitisers using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Definitions

  • The field relates generally to image processing, and more particularly to image processing for recognition of gestures.
  • Image processing is important in a wide variety of different applications, and such processing may involve two-dimensional (2D) images, three-dimensional (3D) images, or combinations of multiple images of different types.
  • For example, a 3D image of a spatial scene may be generated in an image processor using triangulation based on multiple 2D images captured by respective cameras arranged such that each camera has a different view of the scene.
  • Alternatively, a 3D image can be generated directly using a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera.
  • In a typical gesture recognition arrangement, raw image data from an image sensor is usually subject to various preprocessing operations.
  • The preprocessed image data is then subject to additional processing used to recognize gestures in the context of particular gesture recognition applications.
  • Such applications may be implemented, for example, in video gaming systems, kiosks or other systems providing a gesture-based user interface.
  • These other systems include various electronic consumer devices such as laptop computers, tablet computers, desktop computers, mobile phones and television sets.
  • In one embodiment, an image processing system comprises an image processor having image processing circuitry and an associated memory.
  • The image processor is configured to implement a gesture recognition system utilizing the image processing circuitry and the memory, with the gesture recognition system comprising a cursor detector, a dynamic gesture detector, a static pose recognition module, and a finite state machine configured to control selective enabling of the cursor detector, the dynamic gesture detector and the static pose recognition module.
  • The finite state machine has a plurality of states including a cursor detected state in which cursor location and tracking are applied responsive to detection of a cursor in a current frame, a dynamic gesture detected state in which dynamic gesture recognition is applied responsive to detection of a dynamic gesture in the current frame, and a static pose recognition state in which static pose recognition is applied responsive to failure to detect a cursor or a dynamic gesture in the current frame.
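  • As an illustrative sketch only, the three states and the detection-priority rule described above might be modeled as follows (Python, with hypothetical names not taken from the patent):

        from enum import Enum, auto

        class GRState(Enum):
            CURSOR_DETECTED = auto()
            DYNAMIC_GESTURE_DETECTED = auto()
            STATIC_POSE_RECOGNITION = auto()

        def next_state(cursor_found: bool, dynamic_found: bool) -> GRState:
            # Cursor and dynamic gesture detection take priority; static pose
            # recognition applies only when neither is detected in the frame.
            if cursor_found:
                return GRState.CURSOR_DETECTED
            if dynamic_found:
                return GRState.DYNAMIC_GESTURE_DETECTED
            return GRState.STATIC_POSE_RECOGNITION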
  • Other embodiments of the invention include, but are not limited to, methods, apparatus, systems, processing devices, integrated circuits, and computer-readable storage media having computer program code embodied therein.
  • FIG. 1 is a block diagram of an image processing system comprising an image processor implementing a gesture recognition process in an illustrative embodiment.
  • FIG. 2 shows a more detailed view of an exemplary gesture recognition system of the image processor of FIG. 1.
  • FIG. 3 illustrates an embodiment of a recognition subsystem of the gesture recognition system of FIG. 2 without a finite state machine and cursor and dynamic gesture detectors.
  • FIG. 4 illustrates an embodiment of a recognition subsystem of the gesture recognition system of FIG. 2 with a finite state machine and cursor and dynamic gesture detectors.
  • FIG. 5 shows a more detailed view of portions of the recognition subsystem in the FIG. 4 embodiment.
  • FIG. 6 shows an exemplary state update module for the finite state machine of the recognition subsystem in the FIG. 4 embodiment.
  • Embodiments of the invention will be illustrated herein in conjunction with exemplary image processing systems that include image processors or other types of processing devices configured to perform gesture recognition. It should be understood, however, that embodiments of the invention are more generally applicable to any image processing system or associated device or technique that involves recognizing gestures in one or more images.
  • FIG. 1 shows an image processing system 100 in an embodiment of the invention.
  • The image processing system 100 comprises an image processor 102 that is configured for communication over a network 104 with a plurality of processing devices 106-1, 106-2, . . . 106-M.
  • The image processor 102 implements a recognition subsystem 108 within a gesture recognition (GR) system 110.
  • The GR system 110 in this embodiment processes input images 111 from one or more image sources and provides corresponding GR-based output 112.
  • The GR-based output 112 may be supplied to one or more of the processing devices 106 or to other system components not specifically illustrated in this diagram.
  • The recognition subsystem 108 of GR system 110 more particularly comprises cursor and dynamic gesture detectors 113, a static pose recognition module 114, and a finite state machine 115 configured to control selective enabling of the cursor detector, the dynamic gesture detector and the static pose recognition module.
  • The operation of illustrative embodiments of the GR system 110 of image processor 102 will be described in greater detail below in conjunction with FIGS. 2 through 6.
  • The recognition subsystem 108 receives inputs from additional subsystems 116, which may comprise one or more image processing subsystems configured to implement functional blocks associated with gesture recognition in the GR system 110, such as, for example, functional blocks for input frame acquisition, noise reduction or other types of preprocessing, and background estimation and removal. It should be understood, however, that these particular functional blocks are exemplary only, and other embodiments of the invention can be configured using other arrangements of additional or alternative functional blocks.
  • The recognition subsystem 108 generates GR events for consumption by one or more of a set of GR applications 118.
  • The GR events may comprise information indicative of recognition of one or more particular gestures within one or more frames of the input images 111, such that a given GR application in the set of GR applications 118 can translate that information into a particular command or set of commands to be executed by that application.
  • The GR system 110 may provide GR events or other information, possibly generated by one or more of the GR applications 118, as GR-based output 112. Such output may be provided to one or more of the processing devices 106. In other embodiments, at least a portion of the set of GR applications 118 is implemented at least in part on one or more of the processing devices 106.
  • Portions of the GR system 110 may be implemented using separate processing layers of the image processor 102. These processing layers comprise at least a portion of what is more generally referred to herein as "image processing circuitry" of the image processor 102.
  • The image processor 102 may comprise a preprocessing layer implementing a preprocessing module and a plurality of higher processing layers for performing other functions associated with recognition of gestures within frames of an input image stream comprising the input images 111.
  • Such processing layers may also be implemented in the form of respective subsystems of the GR system 110.
  • Embodiments of the invention are not limited to recognition of static or dynamic hand gestures, but can instead be adapted for use in a wide variety of other machine vision applications involving gesture recognition, and may comprise different numbers, types and arrangements of modules, subsystems, processing layers and associated functional blocks.
  • Processing operations associated with the image processor 102 in the present embodiment may instead be implemented at least in part on other devices in other embodiments.
  • For example, preprocessing operations may be implemented at least in part in an image source comprising a depth imager or other type of imager that provides at least a portion of the input images 111.
  • Also, one or more of the applications 118 may be implemented on a different processing device than the subsystems 108 and 116, such as one of the processing devices 106.
  • Moreover, the image processor 102 may itself comprise multiple distinct processing devices, such that different portions of the GR system 110 are implemented using two or more processing devices.
  • The term "image processor" as used herein is intended to be broadly construed so as to encompass these and other arrangements.
  • The GR system 110 performs preprocessing operations on received input images 111 from one or more image sources.
  • This received image data in the present embodiment is assumed to comprise raw image data received from a depth sensor, but other types of received image data may be processed in other embodiments.
  • Such preprocessing operations may include noise reduction and background removal.
  • The raw image data received by the GR system 110 from the depth sensor may include a stream of frames comprising respective depth images, with each such depth image comprising a plurality of depth image pixels.
  • A given depth image D may be provided to the GR system 110 in the form of a matrix of real values.
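  • As a minimal illustration of this matrix form (the values are hypothetical and units are sensor-specific):

        import numpy as np

        # A depth map D is a matrix of real values, one depth value per pixel.
        D = np.array([
            [1.52, 1.50, 1.49],
            [1.51, 0.87, 0.88],   # a closer object, e.g. a hand, in the scene
            [1.53, 0.86, 0.85],
        ], dtype=np.float32)

        nearest = D.min()  # depth of the closest surface in the frame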
  • A given such depth image is also referred to herein as a depth map.
  • The term "image" as used herein is intended to be broadly construed.
  • The image processor 102 may interface with a variety of different image sources and image destinations.
  • The image processor 102 may receive input images 111 from one or more image sources and provide processed images as part of GR-based output 112 to one or more image destinations. At least a subset of such image sources and image destinations may be implemented at least in part utilizing one or more of the processing devices 106. Accordingly, at least a subset of the input images 111 may be provided to the image processor 102 over network 104 for processing from one or more of the processing devices 106. Similarly, processed images or other related GR-based output 112 may be delivered by the image processor 102 over network 104 to one or more of the processing devices 106. Such processing devices may therefore be viewed as examples of image sources or image destinations as those terms are used herein.
  • A given image source may comprise, for example, a 3D imager such as an SL camera or a ToF camera configured to generate depth images, or a 2D imager configured to generate grayscale images, color images, infrared images or other types of 2D images. It is also possible that a single imager or other image source can provide both a depth image and a corresponding 2D image such as a grayscale image, a color image or an infrared image. For example, certain types of existing 3D cameras are able to produce a depth map of a given scene as well as a 2D image of the same scene. Alternatively, a 3D imager providing a depth map of a given scene can be arranged in proximity to a separate high-resolution video camera or other 2D imager providing a 2D image of substantially the same scene.
  • Another example of an image source is a storage device or server that provides images to the image processor 102 for processing.
  • A given image destination may comprise, for example, one or more display screens of a human-machine interface of a computer or mobile phone, or at least one storage device or server that receives processed images from the image processor 102.
  • In some embodiments, the image processor 102 may be at least partially combined with at least a subset of the one or more image sources and the one or more image destinations on a common processing device.
  • For example, a given image source and the image processor 102 may be collectively implemented on the same processing device.
  • Similarly, a given image destination and the image processor 102 may be collectively implemented on the same processing device.
  • In the present embodiment, the image processor 102 is configured to recognize hand gestures, although the disclosed techniques can be adapted in a straightforward manner for use with other types of gesture recognition processes.
  • The input images 111 may comprise respective depth images generated by a depth imager such as an SL camera or a ToF camera.
  • Other types and arrangements of images may be received, processed and generated in other embodiments, including 2D images or combinations of 2D and 3D images.
  • The particular arrangement of subsystems, applications and other components shown in image processor 102 in the FIG. 1 embodiment can be varied in other embodiments.
  • For example, an otherwise conventional image processing integrated circuit or other type of image processing circuitry suitably modified to perform processing operations as disclosed herein may be used to implement at least a portion of one or more of the components 113, 114, 115, 116 and 118 of image processor 102.
  • One possible example of image processing circuitry that may be used in one or more embodiments of the invention is an otherwise conventional graphics processor suitably reconfigured to perform functionality associated with one or more of the components 113, 114, 115, 116 and 118.
  • The processing devices 106 may comprise, for example, computers, mobile phones, servers or storage devices, in any combination. One or more such devices also may include, for example, display screens or other user interfaces that are utilized to present images generated by the image processor 102. The processing devices 106 may therefore comprise a wide variety of different destination devices that receive processed image streams or other types of GR-based output 112 from the image processor 102 over the network 104, including by way of example at least one server or storage device that receives one or more processed image streams from the image processor 102.
  • The image processor 102 may be at least partially combined with one or more of the processing devices 106.
  • For example, the image processor 102 may be implemented at least in part using a given one of the processing devices 106.
  • A computer or mobile phone may be configured to incorporate the image processor 102 and possibly a given image source.
  • Image sources utilized to provide input images 111 in the image processing system 100 may therefore comprise cameras or other imagers associated with a computer, mobile phone or other processing device.
  • The image processor 102 may be at least partially combined with one or more image sources or image destinations on a common processing device.
  • The image processor 102 in the present embodiment is assumed to be implemented using at least one processing device and comprises a processor 120 coupled to a memory 122.
  • The processor 120 executes software code stored in the memory 122 in order to control the performance of image processing operations.
  • The image processor 102 also comprises a network interface 124 that supports communication over network 104.
  • The network interface 124 may comprise one or more conventional transceivers. In other embodiments, the image processor 102 need not be configured for communication with other devices over a network, and in such embodiments the network interface 124 may be eliminated.
  • The processor 120 may comprise, for example, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor (DSP), or other similar processing device component, as well as other types and arrangements of image processing circuitry, in any combination.
  • The memory 122 stores software code for execution by the processor 120 in implementing portions of the functionality of image processor 102, such as the subsystems 108 and 116 and the GR applications 118.
  • A given such memory that stores software code for execution by a corresponding processor is an example of what is more generally referred to herein as a computer-readable medium or other type of computer program product having computer program code embodied therein, and may comprise, for example, electronic memory such as random access memory (RAM) or read-only memory (ROM), magnetic memory, optical memory, or other types of storage devices in any combination.
  • The processor may comprise portions or combinations of a microprocessor, ASIC, FPGA, CPU, ALU, DSP or other image processing circuitry.
  • Embodiments of the invention may be implemented in the form of integrated circuits.
  • Identical die are typically formed in a repeated pattern on a surface of a semiconductor wafer.
  • Each die includes an image processor or other image processing circuitry as described herein, and may include other structures or circuits.
  • The individual die are cut or diced from the wafer, then packaged as an integrated circuit.
  • One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered embodiments of the invention.
  • Image processing system 100 as shown in FIG. 1 is exemplary only, and the system 100 in other embodiments may include other elements in addition to or in place of those specifically shown, including one or more elements of a type commonly found in a conventional implementation of such a system.
  • In some embodiments, the image processing system 100 is implemented as a video gaming system or other type of gesture-based system that processes image streams in order to recognize user gestures.
  • The disclosed techniques can be similarly adapted for use in a wide variety of other systems requiring a gesture-based human-machine interface, and can also be applied to other applications, such as machine vision systems in robotics and other industrial applications that utilize gesture recognition.
  • Also, embodiments of the invention are not limited to use in recognition of hand gestures, but can be applied to other types of gestures as well.
  • The term "gesture" as used herein is therefore intended to be broadly construed.
  • The input images 111 received in the image processor 102 from an image source comprise input depth images each referred to as an input frame.
  • As indicated above, this source may comprise a depth imager such as an SL or ToF camera comprising a depth image sensor.
  • Other types of image sensors including, for example, grayscale image sensors, color image sensors or infrared image sensors, may be used in other embodiments.
  • A given image sensor typically provides image data in the form of one or more rectangular matrices of real or integer numbers corresponding to respective input image pixels. These matrices can contain per-pixel information such as depth values and corresponding amplitude or intensity values. Other per-pixel information such as color, phase and validity may additionally or alternatively be provided.
  • As illustrated in FIG. 2, the GR system 110 is configured to receive raw image data from an image sensor 200 and includes a preprocessing subsystem 202, a background estimation and removal subsystem 204, recognition subsystem 108 and an application 118-1.
  • The image sensor 200 in this embodiment is assumed to comprise a variable frame rate image sensor, such as a ToF image sensor configured to operate at a variable frame rate. Other types of sources supporting variable frame rates can be used in other embodiments.
  • The preprocessing subsystem 202 is illustratively configured to perform filtering or other noise reduction operations on the raw image data received from the image sensor 200 in order to produce a filtered image for application to the background estimation and removal subsystem 204.
  • Any of a wide variety of image noise reduction techniques can be utilized in the subsystem 202.
  • Examples of suitable techniques are described in PCT International Application PCT/US13/56937, filed on August 28, 2013 and entitled "Image Processor With Edge-Preserving Noise Suppression Functionality," which is commonly assigned herewith and incorporated by reference herein.
  • The subsystem 204 estimates and removes the image background to produce an image without background that is applied to the recognition subsystem 108.
  • Various techniques can be used for this purpose including, for example, techniques described in Russian Patent Application No. 2013135506, filed July 29, 2013 and entitled "Image Processor Configured for Efficient Estimation and Elimination of Background Information in Images," which is commonly assigned herewith and incorporated by reference herein.
  • The recognition subsystem 108 recognizes within the image a gesture from a specified gesture vocabulary and generates a corresponding gesture pattern identifier (ID) and possibly additional related parameters for delivery to the application 118-1.
  • The configuration of such information is adapted in accordance with the specific needs of the application.
  • For example, the application may be configured to translate the identified gesture into a command or set of commands.
  • FIG. 3 illustrates an embodiment 300 of recognition subsystem 108 that does not include cursor and dynamic gesture detectors 113 and finite state machine 115.
  • The static pose recognition module 114 directly processes an input image to detect one of a plurality of predefined static poses.
  • By way of example, the predefined static poses can be separated into three groups as follows:
  • 1. Cursor poses, including pointing finger or "fingergun" poses for short range applications, and pointing hand or other arm or body poses for long range applications.
  • 2. Poses used for defining dynamic gestures. For example, palm poses may be used to define swipe gestures.
  • 3. Poses defined as static gestures.
  • Groups 2 and 3 above may intersect, but the gesture vocabulary of the GR system 110 is typically configured to avoid such intersection.
  • The cursor is considered a particular type of gesture used to indicate cursor position in the GR system. Accordingly, a cursor may also be referred to herein as a cursor gesture.
  • A dynamic gesture typically comprises a combination of one or more static poses and some associated movement.
  • Examples of dynamic hand gestures include a swipe left gesture, a swipe right gesture, a swipe up gesture, a swipe down gesture, a poke gesture and a wave gesture, although various subsets of these dynamic gestures as well as additional or alternative dynamic gestures may be supported in other embodiments. Accordingly, embodiments of the invention are not limited to use with any particular gesture vocabulary.
  • In long range applications, the one or more static poses and associated movement of a given dynamic gesture may comprise respective static poses and associated movement of the arm or body.
  • The static pose recognition module 114 is configured to identify a particular pose in the input image.
  • The pose may be a cursor pose, a dynamic gesture pose, or a pose defined as a static gesture.
  • The output of the static pose recognition module 114 for a given input image in this embodiment comprises a static pose pattern ID, which identifies a particular pose.
  • The output may additionally include static pose parameters generated by the static pose recognition module 114.
  • The cursor location and tracking block 302 is illustratively configured to determine coordinates of a cursor point within the image and to apply appropriate noise reduction filters, which may involve averaging cursor coordinates within a specified time period.
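  • A sketch of such a filter, assuming cursor coordinates are averaged over a sliding window of recent frames (the window length and names are hypothetical, not taken from the patent):

        from collections import deque

        class CursorSmoother:
            """Averages the last `window` cursor positions to suppress jitter."""

            def __init__(self, window: int = 5):
                self.history = deque(maxlen=window)

            def update(self, x: float, y: float):
                self.history.append((x, y))
                n = len(self.history)
                avg_x = sum(p[0] for p in self.history) / n
                avg_y = sum(p[1] for p in self.history) / n
                return avg_x, avg_y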
  • Decision block 306 determines if the identified pose is a dynamic gesture pose, and if so, dynamic gesture recognition block 304 is applied to generate a dynamic gesture pattern ID that is provided to application 118-1, possibly in conjunction with parameters determined by optional dynamic gesture parameters evaluation block 308.
  • For example, the parameters evaluation block 308 may be configured to include extended noise reduction filters in order to calculate a zoom factor parameter of a zoom gesture.
  • The dynamic gesture recognition block 304 calculates velocities of one or more parts of the image, based on movement of those parts over a specified period of time relative to their respective positions in one or more previous images of an image sequence. The calculated velocities are utilized in block 304 in combination with the static pose pattern ID and any associated parameters provided by the static pose recognition module 114 to recognize a particular gesture.
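  • A hedged sketch of this velocity computation, estimating the velocity of a tracked image part from its displacement between consecutive frames:

        def estimate_velocity(prev_pos, curr_pos, dt):
            """Velocity of an image part, e.g. an ROI mass center.

            prev_pos, curr_pos: (x, y) coordinates in consecutive frames.
            dt: elapsed time between the two frames, in seconds.
            """
            vx = (curr_pos[0] - prev_pos[0]) / dt
            vy = (curr_pos[1] - prev_pos[1]) / dt
            return vx, vy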
  • If the identified pose is not a cursor pose or a dynamic gesture pose, the identified pose is assumed to be a pose defined as a static gesture, and the static pose pattern ID is provided to application 118-1, possibly in conjunction with parameters determined by optional static pose parameters evaluation block 310.
  • The parameters evaluation blocks 308 and 310 may be incorporated at least in part within the respective dynamic gesture recognition block 304 and static pose recognition module 114. Such arrangements may be utilized, for example, if the associated parameters are part of a feature vector for a Gaussian Mixture Model (GMM) implemented in the recognition block or module.
  • The static pose recognition module 114 performs relatively complex and time-consuming operations as compared to other portions of the GR system 110, such as cursor location and tracking block 302 and dynamic gesture recognition block 304.
  • For example, the static pose recognition module 114 may be configured to perform operations such as additional background evaluation and removal, region of interest (ROI) detection, morphological image processing, affine transformations such as shifting, rotating and zooming, and expectation maximization for GMMs.
  • The static pose recognition module 114 when arranged with other system components as shown in FIG. 3 can create a significant bottleneck for the overall GR system 110. Such a bottleneck can make it difficult to achieve desired levels of recognition precision, particularly when processing an image stream from an image sensor in real time at high frame rates.
  • FIG. 4 illustrates an embodiment 400 of recognition subsystem 108 that includes cursor and dynamic gesture detectors 113 and finite state machine 115.
  • The cursor detector and dynamic gesture detector are more specifically denoted in this embodiment by respective reference numerals 113A and 113B, and are illustratively shown as being implemented within the finite state machine or FSM 115.
  • This embodiment also includes static pose recognition module 114, cursor location and tracking block 302, dynamic gesture recognition block 304, optional parameters evaluation blocks 308 and 310, and application 118-1.
  • This embodiment is an example of an arrangement in which the finite state machine 115 is configured to control selective enabling of the cursor detector 113A, the dynamic gesture detector 113B and the static pose recognition module 114.
  • The finite state machine 115 may be configured such that only one of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 is enabled at a time.
  • Other types of selective enabling of these components using different finite state machines may be used in other embodiments. Accordingly, the term "selective enabling" as used herein is intended to be broadly construed.
  • The finite state machine 115 in the present embodiment is illustratively configured to have a plurality of states including a cursor detected state in which the cursor location and tracking block 302 is applied responsive to detection of a cursor in a current frame, a dynamic gesture detected state in which dynamic gesture recognition block 304 is applied responsive to detection of a dynamic gesture in the current frame, and a static pose recognition state in which static pose recognition module 114 is applied responsive to failure to detect a cursor or a dynamic gesture in the current frame.
  • An initial state of the finite state machine 115 for the current frame is given by a final state of the finite state machine for a previous frame.
  • The final state of the finite state machine for the current frame is utilized as an initial state of the finite state machine for a subsequent frame.
  • A final state of the finite state machine for a given frame is determined as a function of outputs of respective ones of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 for that frame, as will be described in more detail below in conjunction with FIG. 6.
  • The embodiment of FIG. 4 is advantageously configured to eliminate the above-described potential bottleneck that can arise when the static pose recognition module 114 is arranged as shown in FIG. 3. More particularly, in the FIG. 4 embodiment, the finite state machine 115 controls selective enabling of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 in a manner that allows the cursor detector 113A and the dynamic gesture detector 113B to operate at a higher frame rate than the static pose recognition module 114. As part of this exemplary selective enabling, the finite state machine can adjust a frame rate of operation of the recognition subsystem 108 of GR system 110 responsive to outputs of the cursor detector 113A and the dynamic gesture detector 113B. This facilitates the processing of an image stream in real time at high frame rates, allowing higher levels of recognition precision to be achieved relative to the FIG. 3 embodiment.
  • The FIG. 4 embodiment allows a cursor and dynamic gestures to be recognized and evaluated using relatively short computation times and therefore relatively high frame rates, on the order of 90 frames per second or more, while static gestures are recognized and evaluated using relatively long computation times and therefore relatively low frame rates, on the order of about 30 frames per second.
  • Use of such variable frame rates is supported by an image sensor that can operate at variable frame rates, such as the ToF image sensor assumed for the present embodiment.
  • The finite state machine 115 controls the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 such that higher frame rates are provided for more time-critical tasks such as those performed in cursor location and tracking block 302 and dynamic gesture recognition block 304, while lower frame rates are provided for less time-critical tasks such as those performed by static pose recognition module 114.
  • The frame rate is dynamically varied at runtime depending upon whether the current frame is determined to contain a cursor, a dynamic gesture or a static gesture.
  • The dynamic variation of the frame rate at runtime can be achieved in the recognition subsystem 108 of GR system 110 by acquiring the next frame immediately when the current frame has been processed, rather than acquiring input frames at a fixed rate.
  • The FIG. 4 embodiment therefore permits faster processing of a current frame and faster acquisition of a subsequent frame upon detection of a cursor or a dynamic gesture in the current frame.
  • If the image sensor supplying input images to the image processor 102 does not support a variable frame rate, dynamic variation of the frame rate can still be achieved in the GR system 110 by, for example, skipping one or more input frames in order to emulate variable frame rate image sensor functionality.
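  • The following sketch illustrates both acquisition strategies; the sensor object and its acquire() method are hypothetical, shown only to make the control flow concrete:

        def run_variable_rate(sensor, process_frame):
            # Variable-rate sensor: request the next frame immediately after
            # the current frame has been processed, rather than at a fixed rate.
            while True:
                frame = sensor.acquire()   # hypothetical blocking acquisition
                process_frame(frame)

        def run_fixed_rate_with_skipping(sensor, process_frame, skip):
            # Fixed-rate sensor: emulate a lower effective frame rate by
            # discarding `skip` frames after each processed frame.
            while True:
                frame = sensor.acquire()
                process_frame(frame)
                for _ in range(skip):
                    sensor.acquire()       # frame is dropped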
  • In some embodiments, the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 each operate at a different frame rate. Additionally, other embodiments can be configured such that all three of these components operate at the same frame rate.
  • The recognition subsystem 108 in the FIG. 4 embodiment may be viewed as being separated into distinct portions for detection and processing of cursors, dynamic gestures and static gestures, respectively. Different combinations of hardware, software and firmware can be used for each of these portions.
  • The finite state machine 115 in the present embodiment may be viewed as controlling selective enabling of the portions such that only one of the portions is enabled at a time.
  • References herein to selective enabling of cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 should be broadly construed so as to encompass in some embodiments selective enabling of respective associated elements, such as cursor location and tracking block 302 for cursor detector 113A, dynamic gesture recognition block 304 and dynamic gesture parameters evaluation block 308 for dynamic gesture detector 113B, and static pose parameters evaluation block 310 for static pose recognition module 114.
  • The cursor detector 113A is configured to detect the presence of a cursor pose within the current frame.
  • As noted above, a cursor pose may comprise a pointing finger pose or fingergun pose for short range applications, and a pointing hand or other arm or body pose for long range applications.
  • The cursor detector combines all other non-cursor poses into a single recognition class, illustratively denoted as an "other pose" class, which significantly reduces the number of classes, from the eight or more used for respective static poses in a typical gesture vocabulary to two or three classes. Such an arrangement allows the use of efficient and time-saving recognition algorithms without affecting the recognition quality.
  • As one example, the cursor detector 113A can be implemented using relatively simple threshold logic by calculating the size of the hand nearest to a controlled device and comparing the calculated size to a specified threshold. If the hand size is below the threshold, it is recognized as a pointing finger or pointing hand, and the pose is recognized as a cursor pose. Numerous other implementations of the cursor detector module are possible.
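  • A minimal sketch of this threshold logic, assuming the apparent size of the nearest hand has already been measured (the threshold value is hypothetical):

        def detect_cursor_pose(hand_size, threshold=1200.0):
            # A small apparent hand size suggests a pointing finger or pointing
            # hand rather than an open palm, so the pose is treated as a cursor.
            return hand_size < threshold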
  • The dynamic gesture detector 113B is configured to detect the presence of a dynamic gesture pose within the current frame.
  • As with the cursor detector, all static poses that are not used to define dynamic gestures can be combined into a single recognition class in order to simplify the dynamic gesture detector.
  • For example, the dynamic gesture detector can be configured to operate using four classes of static poses, namely, a palm class used for swipe gestures, a palm with fingers class, a palm with pinch class used for zoom gestures, and the "other pose" class.
  • One possible implementation of the dynamic gesture detector in the present embodiment also utilizes relatively simple threshold logic, by calculating velocities for parts of the image and comparing the calculated velocities to respective specified thresholds. If the calculated velocities exceed the thresholds, significant motion is detected and the detector determines that the gesture in the current frame is not static. This example assumes that the definition of a static gesture includes no significant motion.
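  • Under the stated assumption that a static gesture exhibits no significant motion, the detector's test might look like the following sketch (the threshold value is hypothetical):

        import math

        def is_dynamic(velocities, v_threshold=0.25):
            """velocities: iterable of (vx, vy) pairs for tracked image parts.

            Returns True if any part moves fast enough to count as significant
            motion, i.e. the gesture in the current frame is not static.
            """
            return any(math.hypot(vx, vy) > v_threshold
                       for vx, vy in velocities)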
  • In some embodiments, the dynamic gesture detector 113B may also be configured to perform dynamic gesture recognition. Accordingly, in these embodiments, the separate dynamic gesture recognition block can be eliminated.
  • Parameters computed by the cursor detector 113A or dynamic gesture detector 113B may be provided to the respective cursor location and tracking block 302 and dynamic gesture recognition block 304.
  • For example, parameters such as finger coordinates and velocity computed by the cursor detector may be provided to the cursor location and tracking block 302 for application of averaging or other noise reduction operations.
  • Also, some of the parameters computed by the cursor detector can be provided to the dynamic gesture detector, and vice versa.
  • For example, an ROI mass center velocity computed by one of the detectors 113 may be re-used by the other.
  • Recognition subsystem components such as static pose recognition module 114, cursor location and tracking block 302, dynamic gesture recognition block 304 and parameters evaluation blocks 308 and 310 may be configured differently in the FIG. 4 embodiment than in the FIG. 3 embodiment, depending upon what parameters are computed by prior blocks or shared between blocks in the FIG. 4 embodiment.
  • The cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 have associated therewith respective decision blocks 412, 414 and 415, which determine whether or not the corresponding cursor, dynamic gesture or static pose has been detected in the current frame.
  • The decision blocks 412, 414 and 415, although shown in the figure as being separate from the respective cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114, can in other embodiments be incorporated within those respective elements.
  • The recognition subsystem 108 implements real time gesture recognition using a variable frame rate depending on the current state of the finite state machine 115 and the outputs of the decision blocks 412, 414 and 415. Additional decision blocks in the FIG. 4 embodiment include decision blocks 416, 417 and 418.
  • In this embodiment, the static pose recognition module 114 when enabled generates a static pose pattern ID and optionally one or more associated parameters, the dynamic gesture recognition block 304 when enabled generates a dynamic gesture pattern ID, the parameters evaluation block 308 when enabled generates parameters associated with the dynamic gesture pattern ID, and the parameters evaluation block 310 when enabled generates additional parameters associated with the static pose pattern ID.
  • An affirmative output from decision block 412 or decision block 414 will lead to application of the respective cursor location and tracking block 302 or dynamic gesture recognition block 304.
  • Negative outputs from the decision blocks 412 and 414 are not explicitly shown in FIG. 4, but are processed in the manner indicated in FIG. 5.
  • An affirmative output from decision block 415 will lead to decision block 416, which directs the process to the cursor location and tracking block 302 if the recognized static pose is a cursor pose, and otherwise directs the process to static pose parameters evaluation block 310. It is therefore possible for the static pose recognition module 114 to detect a cursor pose even if the cursor detector 113A did not detect a cursor pose in its initial detection iteration, due to additional image enhancements performed in the course of static pose recognition.
  • A negative output from decision block 415 will lead to decision block 417, which directs the process to the cursor location and tracking block 302 if the finite state machine 115 is still in a cursor detected state from a previous frame, and otherwise directs the process to decision block 418.
  • An affirmative output from decision block 418 indicates that the finite state machine 115 is still in a dynamic gesture detected state from a previous frame, and the process is directed to the dynamic gesture recognition block 304.
  • A negative output from decision block 418 indicates that no gesture has been detected in the current frame, and this information is provided to application 118-1.
  • The decision blocks 417 and 418 are therefore configured such that if no static pose is detected by the static pose recognition module 114, and the finite state machine is in either its cursor detected or dynamic gesture detected state, the decision is made using the finite state machine state. This additional correction significantly decreases the misdetection rate of the GR system.
  • FIG. 5 shows a more detailed view of the control functionality provided by finite state machine 115 in relation to cursor detector 113A and its associated blocks 412 and 302, dynamic gesture detector 113B and its associated blocks 414 and 304, and static pose recognition module 114. Additional decision blocks 500 and 502 are shown in FIG. 5 and are assumed to be present in the embodiment 400 but are omitted from FIG. 4 for simplicity and clarity of illustration.
  • If decision block 500 determines that an initial state of the finite state machine 115 for a current frame is a dynamic gesture detected state, based on a determination made for a previous frame, the dynamic gesture detector 113B is initially enabled for the current frame. However, if decision block 500 determines that the initial state of the finite state machine for the current frame is not a dynamic gesture detected state, the cursor detector 113A is initially enabled for the current frame.
  • Thus, either the cursor detector 113A or the dynamic gesture detector 113B is activated first for the current frame. If a dynamic gesture was detected in the previous frame, the finite state machine will initially be in the dynamic gesture detected state in the current frame, and the dynamic gesture detector is enabled first in the current frame. Otherwise, the cursor detector is enabled first in the current frame. Assuming by way of example that the cursor detector 113A is initially enabled, decision block 412 indicates whether or not the cursor detector detects a cursor in the current frame. If a cursor is detected by the cursor detector for the current frame, cursor location and tracking block 302 is applied using a cursor gesture pattern ID provided by the cursor detector 113A. If a cursor is not detected by the cursor detector for the current frame, the finite state machine 115 enables the dynamic gesture detector 113B for the current frame.
  • If a dynamic gesture is detected by the dynamic gesture detector 113B for the current frame, dynamic gesture recognition block 304 is applied. If a dynamic gesture is not detected by the dynamic gesture detector for the current frame, and the finite state machine 115 is still in a dynamic gesture detected state from a previous frame, the finite state machine enables the cursor detector 113A for the current frame. Processing then continues through decision block 412 as previously described. If a dynamic gesture is not detected by the dynamic gesture detector, and if the decision block 502 indicates that the finite state machine is not in a dynamic gesture detected state, the finite state machine enables the static pose recognition module 114 for the current frame.
  • The finite state machine control is therefore configured such that the static pose recognition module 114 is enabled for the current frame only if a cursor is not detected by the cursor detector 113A and a dynamic gesture is not detected by the dynamic gesture detector 113B.
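  • The per-frame control flow of FIG. 5 can be sketched as follows, reusing the hypothetical GRState enum from the earlier sketch; the detector callables and the simplified fallback are assumptions for illustration, not the patent's implementation:

        def process_frame(fsm_state, cursor_det, dynamic_det, static_rec, frame):
            # If the previous frame left the FSM in the dynamic gesture detected
            # state, the dynamic gesture detector runs first; otherwise the
            # cursor detector runs first.
            if fsm_state == GRState.DYNAMIC_GESTURE_DETECTED:
                if dynamic_det(frame):
                    return GRState.DYNAMIC_GESTURE_DETECTED  # block 304 applied
                if cursor_det(frame):
                    return GRState.CURSOR_DETECTED           # block 302 applied
            else:
                if cursor_det(frame):
                    return GRState.CURSOR_DETECTED           # block 302 applied
                if dynamic_det(frame):
                    return GRState.DYNAMIC_GESTURE_DETECTED  # block 304 applied
            # Static pose recognition runs only when neither detector fires.
            static_rec(frame)
            return GRState.STATIC_POSE_RECOGNITION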
  • Other types of finite state machine control can be provided in other embodiments.
  • FIG. 6 illustrates the manner in which the state of the finite state machine 115 is updated in conjunction with completion of the recognition processing for the current frame. More particularly, in this exemplary state update module, the outputs of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 are applied to a maximization element 600, the output of which is used to determine a new state 602 for the finite state machine.
  • The outputs of the respective cursor detector, dynamic gesture detector and static pose recognition module comprise the respective cursor gesture pattern ID, dynamic gesture pattern ID and static pose pattern ID, if any such IDs were detected. If one or more of the cursor detector, dynamic gesture detector and static pose recognition module were not enabled under control of the finite state machine in the current frame, or if enabled in the current frame did not result in an affirmative detection decision, its output is a zero as indicated in the figure. It is assumed that the finite state machine control in the present embodiment ensures that only one of the cursor detector, dynamic gesture detector and static pose recognition module will generate an affirmative detection decision in the current frame.
  • The maximization element 600 will determine the new state 602 for the finite state machine as one of the cursor detected state, the dynamic gesture detected state or the static pose recognition state, based on which of the corresponding pattern ID outputs was nonzero for the current frame.
  • This new state 602 becomes the final state for the finite state machine in the current frame, and as indicated previously also serves as the initial state of the finite state machine for the next frame.
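  • A sketch of this state update, again reusing the hypothetical GRState enum; the all-zero default is an assumption for illustration, since the no-detection case is handled by the decision blocks described above:

        def update_state(cursor_id, dynamic_id, static_id):
            # Each input is a detected pattern ID, or 0 if the corresponding
            # component was disabled or detected nothing; the control described
            # above ensures at most one input is nonzero per frame.
            outputs = {
                GRState.CURSOR_DETECTED: cursor_id,
                GRState.DYNAMIC_GESTURE_DETECTED: dynamic_id,
                GRState.STATIC_POSE_RECOGNITION: static_id,
            }
            if all(v == 0 for v in outputs.values()):
                return GRState.STATIC_POSE_RECOGNITION  # assumed default
            # The maximization element 600 selects the state whose output is
            # the (single) nonzero pattern ID.
            return max(outputs, key=outputs.get)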
  • The particular processing blocks shown in FIGS. 2 through 6 are exemplary only, and additional or alternative blocks can be used in other embodiments.
  • Moreover, blocks illustratively shown as being executed serially in the figures can be performed at least in part in parallel with one or more other blocks or in other pipelined configurations in other embodiments.
  • The illustrative embodiments provide significantly improved gesture recognition performance relative to conventional arrangements.
  • For example, these embodiments can support higher frame rates than would otherwise be possible by substantially reducing the amount of processing time required when cursors or dynamic gestures are detected.
  • The GR system performance is accelerated while ensuring high precision in the recognition process.
  • The disclosed techniques can be applied to a wide range of different GR systems, using depth, grayscale, color, infrared and other types of imagers which support a variable frame rate, as well as imagers which do not support a variable frame rate, and in both short range applications using hand gestures and long range applications using arm or body gestures.
  • Different portions of the GR system 110 can be implemented in software, hardware, firmware or various combinations thereof.
  • For example, software utilizing hardware accelerators may be used for some processing blocks while other blocks are implemented using combinations of hardware and firmware.
  • At least portions of the GR-based output 112 of GR system 110 may be further processed in the image processor 102, or supplied to another processing device 106 or image destination, as mentioned previously.

Abstract

An image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system. The gesture recognition system comprises a cursor detector, a dynamic gesture detector, a static pose recognition module, and a finite state machine configured to control selective enabling of the cursor detector, the dynamic gesture detector and the static pose recognition module. By way of example, the finite state machine includes a cursor detected state in which cursor location and tracking are applied responsive to detection of a cursor in a current frame, a dynamic gesture detected state in which dynamic gesture recognition is applied responsive to detection of a dynamic gesture in the current frame, and a static pose recognition state in which static pose recognition is applied responsive to failure to detect a cursor or a dynamic gesture in the current frame.

Description

FINITE STATE MACHINE CURSOR AND DYNAMIC GESTURE DETECTOR RECOGNITION
Detailed Description
Embodiments of the invention will be illustrated herein in conjunction with exemplary image processing systems that include image processors or other types of processing devices configured to perform gesture recognition. It should be understood, however, that embodiments of the invention are more generally applicable to any image processing system or associated device or technique that involves recognizing gestures in one or more images.
FIG. 1 shows an image processing system 100 in an embodiment of the invention. The image processing system 100 comprises an image processor 102 that is configured for communication over a network 104 with a plurality of processing devices 106-1, 106-2, . . . , 106-M. The image processor 102 implements a recognition subsystem 108 within a gesture recognition (GR) system 110. The GR system 110 in this embodiment processes input images 111 from one or more image sources and provides corresponding GR-based output 112. The GR-based output 112 may be supplied to one or more of the processing devices 106 or to other system components not specifically illustrated in this diagram.
The recognition subsystem 108 of GR system 110 more particularly comprises cursor and dynamic gesture detectors 113, a static pose recognition module 114, and a finite state machine 115 configured to control selective enabling of the cursor detector, the dynamic gesture detector and the static pose recognition module. The operation of illustrative embodiments of the GR system 110 of image processor 102 will be described in greater detail below in conjunction with FIGS. 2 through 6.
The recognition subsystem 108 receives inputs from additional subsystems 116, which may comprise one or more image processing subsystems configured to implement functional blocks associated with gesture recognition in the GR system 110, such as, for example, functional blocks for input frame acquisition, noise reduction or other types of preprocessing, and background estimation and removal. It should be understood, however, that these particular functional blocks are exemplary only, and other embodiments of the invention can be configured using other arrangements of additional or alternative functional blocks.
In the FIG. 1 embodiment, the recognition subsystem 108 generates GR events for consumption by one or more of a set of GR applications 118. For example, the GR events may comprise information indicative of recognition of one or more particular gestures within one or more frames of the input images 111, such that a given GR application in the set of GR applications 118 can translate that information into a particular command or set of commands to be executed by that application.
Additionally or alternatively, the GR system 110 may provide GR events or other information, possibly generated by one or more of the GR applications 118, as GR-based output 112. Such output may be provided to one or more of the processing devices 106. In other embodiments, at least a portion of the set of GR applications 118 is implemented at least in part on one or more of the processing devices 106.
Portions of the GR system 110 may be implemented using separate processing layers of the image processor 102. These processing layers comprise at least a portion of what is more generally referred to herein as "image processing circuitry" of the image processor 102. For example, the image processor 102 may comprise a preprocessing layer implementing a preprocessing module and a plurality of higher processing layers for performing other functions associated with recognition of gestures within frames of an input image stream comprising the input images 111. Such processing layers may also be implemented in the form of respective subsystems of the GR system 110. It should be noted, however, that embodiments of the invention are not limited to recognition of static or dynamic hand gestures, but can instead be adapted for use in a wide variety of other machine vision applications involving gesture recognition, and may comprise different numbers, types and arrangements of modules, subsystems, processing layers and associated functional blocks.
Also, certain processing operations associated with the image processor 102 in the present embodiment may instead be implemented at least in part on other devices in other embodiments. For example, preprocessing operations may be implemented at least in part in an image source comprising a depth imager or other type of imager that provides at least a portion of the input images 111. It is also possible that one or more of the applications 118 may be implemented on a different processing device than the subsystems 108 and 116, such as one of the processing devices 106.
Moreover, it is to be appreciated that the image processor 102 may itself comprise multiple distinct processing devices, such that different portions of the GR system 110 are implemented using two or more processing devices. The term "image processor" as used herein is intended to be broadly construed so as to encompass these and other arrangements.
The GR system 110 performs preprocessing operations on received input images 111 from one or more image sources. This received image data in the present embodiment is assumed to comprise raw image data received from a depth sensor, but other types of received image data may be processed in other embodiments. Such preprocessing operations may include noise reduction and background removal.
The raw image data received by the GR system 110 from the depth sensor may include a stream of frames comprising respective depth images, with each such depth image comprising a plurality of depth image pixels. For example, a given depth image D may be provided to the GR system 110 in the form of a matrix of real values. A given such depth image is also referred to herein as a depth map.
A wide variety of other types of images or combinations of multiple images may be used in other embodiments. It should therefore be understood that the term "image" as used herein is intended to be broadly construed.
The image processor 102 may interface with a variety of different image sources and image destinations. For example, the image processor 102 may receive input images 111 from one or more image sources and provide processed images as part of GR-based output 112 to one or more image destinations. At least a subset of such image sources and image destinations may be implemented at least in part utilizing one or more of the processing devices 106. Accordingly, at least a subset of the input images 111 may be provided to the image processor 102 over network 104 for processing from one or more of the processing devices 106. Similarly, processed images or other related GR-based output 112 may be delivered by the image processor 102 over network 104 to one or more of the processing devices 106. Such processing devices may therefore be viewed as examples of image sources or image destinations as those terms are used herein.
A given image source may comprise, for example, a 3D imager such as an SL camera or a ToF camera configured to generate depth images, or a 2D imager configured to generate grayscale images, color images, infrared images or other types of 2D images. It is also possible that a single imager or other image source can provide both a depth image and a corresponding 2D image such as a grayscale image, a color image or an infrared image. For example, certain types of existing 3D cameras are able to produce a depth map of a given scene as well as a 2D image of the same scene. Alternatively, a 3D imager providing a depth map of a given scene can be arranged in proximity to a separate high-resolution video camera or other 2D imager providing a 2D image of substantially the same scene.
Another example of an image source is a storage device or server that provides images to the image processor 102 for processing.
A given image destination may comprise, for example, one or more display screens of a human-machine interface of a computer or mobile phone, or at least one storage device or server that receives processed images from the image processor 102.
It should also be noted that the image processor 102 may be at least partially combined with at least a subset of the one or more image sources and the one or more image destinations on a common processing device. Thus, for example, a given image source and the image processor 102 may be collectively implemented on the same processing device. Similarly, a given image destination and the image processor 102 may be collectively implemented on the same processing device.
In the present embodiment, the image processor 102 is configured to recognize hand gestures, although the disclosed techniques can be adapted in a straightforward manner for use with other types of gesture recognition processes.
As noted above, the input images 111 may comprise respective depth images generated by a depth imager such as an SL camera or a ToF camera. Other types and arrangements of images may be received, processed and generated in other embodiments, including 2D images or combinations of 2D and 3D images. The particular arrangement of subsystems, applications and other components shown in image processor 102 in the FIG. 1 embodiment can be varied in other embodiments. For example, an otherwise conventional image processing integrated circuit or other type of image processing circuitry suitably modified to perform processing operations as disclosed herein may be used to implement at least a portion of one or more of the components 113, 114, 115, 116 and 118 of image processor 102. One possible example of image processing circuitry that may be used in one or more embodiments of the invention is an otherwise conventional graphics processor suitably reconfigured to perform functionality associated with one or more of the components 113, 114, 115, 116 and 118.
The processing devices 106 may comprise, for example, computers, mobile phones, servers or storage devices, in any combination. One or more such devices also may include, for example, display screens or other user interfaces that are utilized to present images generated by the image processor 102. The processing devices 106 may therefore comprise a wide variety of different destination devices that receive processed image streams or other types of GR-based output 112 from the image processor 102 over the network 104, including by way of example at least one server or storage device that receives one or more processed image streams from the image processor 102.
Although shown as being separate from the processing devices 106 in the present embodiment, the image processor 102 may be at least partially combined with one or more of the processing devices 106. Thus, for example, the image processor 102 may be implemented at least in part using a given one of the processing devices 106. As a more particular example, a computer or mobile phone may be configured to incorporate the image processor 102 and possibly a given image source. Image sources utilized to provide input images 111 in the image processing system 100 may therefore comprise cameras or other imagers associated with a computer, mobile phone or other processing device. As indicated previously, the image processor 102 may be at least partially combined with one or more image sources or image destinations on a common processing device.
The image processor 102 in the present embodiment is assumed to be implemented using at least one processing device and comprises a processor 120 coupled to a memory 122. The processor 120 executes software code stored in the memory 122 in order to control the performance of image processing operations. The image processor 102 also comprises a network interface 124 that supports communication over network 104. The network interface 124 may comprise one or more conventional transceivers. In other embodiments, the image processor 102 need not be configured for communication with other devices over a network, and in such embodiments the network interface 124 may be eliminated.
The processor 120 may comprise, for example, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor (DSP), or other similar processing device component, as well as other types and arrangements of image processing circuitry, in any combination.
The memory 122 stores software code for execution by the processor 120 in implementing portions of the functionality of image processor 102, such as the subsystems 108 and 116 and the GR applications 118. A given such memory that stores software code for execution by a corresponding processor is an example of what is more generally referred to herein as a computer-readable medium or other type of computer program product having computer program code embodied therein, and may comprise, for example, electronic memory such as random access memory (RAM) or read-only memory (ROM), magnetic memory, optical memory, or other types of storage devices in any combination. As indicated above, the processor may comprise portions or combinations of a microprocessor, ASIC, FPGA, CPU, ALU, DSP or other image processing circuitry.
It should also be appreciated that embodiments of the invention may be implemented in the form of integrated circuits. In a given such integrated circuit implementation, identical die are typically formed in a repeated pattern on a surface of a semiconductor wafer. Each die includes an image processor or other image processing circuitry as described herein, and may include other structures or circuits. The individual die are cut or diced from the wafer, then packaged as an integrated circuit. One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Integrated circuits so manufactured are considered embodiments of the invention.
The particular configuration of image processing system 100 as shown in FIG. 1 is exemplary only, and the system 100 in other embodiments may include other elements in addition to or in place of those specifically shown, including one or more elements of a type commonly found in a conventional implementation of such a system.
For example, in some embodiments, the image processing system 100 is implemented as a video gaming system or other type of gesture-based system that processes image streams in order to recognize user gestures. The disclosed techniques can be similarly adapted for use in a wide variety of other systems requiring a gesture-based human-machine interface, and can also be applied to other applications, such as machine vision systems in robotics and other industrial applications that utilize gesture recognition.
Also, as indicated above, embodiments of the invention are not limited to use in recognition of hand gestures, but can be applied to other types of gestures as well. The term "gesture" as used herein is therefore intended to be broadly construed.
The operation of the GR system 110 of image processor 102 will now be described in greater detail with reference to the diagrams of FIGS. 2 through 6.
It is assumed in these embodiments that the input images 111 received in the image processor 102 from an image source comprise input depth images each referred to as an input frame. As indicated above, this source may comprise a depth imager such as an SL or ToF camera comprising a depth image sensor. Other types of image sensors including, for example, grayscale image sensors, color image sensors or infrared image sensors, may be used in other embodiments. A given image sensor typically provides image data in the form of one or more rectangular matrices of real or integer numbers corresponding to respective input image pixels. These matrices can contain per-pixel information such as depth values and corresponding amplitude or intensity values. Other per-pixel information such as color, phase and validity may additionally or alternatively be provided.
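As a concrete illustration of this per-pixel layout, the following minimal sketch models an input frame as rectangular matrices of depth and amplitude values. The `Frame` class, its field names and the frame dimensions are illustrative assumptions, not terminology from this document.

```python
# Minimal sketch of an input frame as rectangular matrices of per-pixel
# values. The Frame class and field names are assumptions for
# illustration only.
from dataclasses import dataclass

import numpy as np


@dataclass
class Frame:
    depth: np.ndarray      # per-pixel depth values (real numbers)
    amplitude: np.ndarray  # corresponding amplitude/intensity values


# Example: a hypothetical 240x320 ToF frame filled with random values.
frame = Frame(
    depth=np.random.rand(240, 320).astype(np.float32),
    amplitude=np.random.rand(240, 320).astype(np.float32),
)
```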
Referring now to FIG. 2, an embodiment of the GR system 110 is shown in more detail. In this embodiment, the GR system 110 is configured to receive raw image data from an image sensor 200 and includes a preprocessing subsystem 202, a background estimation and removal subsystem 204, recognition subsystem 108 and an application 118-1. The image sensor 200 in this embodiment is assumed to comprise a variable frame rate image sensor, such as a ToF image sensor configured to operate at a variable frame rate. Other types of sources supporting variable frame rates can be used in other embodiments.
The preprocessing subsystem 202 is illustratively configured to perform filtering or other noise reduction operations on the raw image data received from the image sensor 200 in order to produce a filtered image for application to the background estimation and removal subsystem 204. Any of a wide variety of image noise reduction techniques can be utilized in the subsystem 202. For example, suitable techniques are described in PCT International Application PCT/US13/56937, filed on August 28, 2013 and entitled "Image Processor With Edge-Preserving Noise Suppression Functionality," which is commonly assigned herewith and incorporated by reference herein.
The subsystem 204 estimates and removes the image background to produce an image without background that is applied to the recognition subsystem 108. Again, various techniques can be used for this purpose including, for example, techniques described in Russian Patent Application No. 2013135506, filed July 29, 2013 and entitled "Image Processor Configured for Efficient Estimation and Elimination of Background Information in Images," which is commonly assigned herewith and incorporated by reference herein.
The recognition subsystem 108 recognizes within the image a gesture from a specified gesture vocabulary and generates a corresponding gesture pattern identifier (ID) and possibly additional related parameters for delivery to the application 118-1. The configuration of such information is adapted in accordance with the specific needs of the application. As noted above, the application may be configured to translate the identified gesture to a command or set of commands.
FIG. 3 illustrates an embodiment 300 of recognition subsystem 108 that does not include cursor and dynamic gesture detectors 113 and finite state machine 115. In this embodiment, the static pose recognition module 114 directly processes an input image to detect one of a plurality of predefined static poses. The predefined static poses can be separated into three groups as follows:
1. Cursor poses, including pointing finger or "fingergun" poses for short range applications, and pointing hand or other arm or body poses for long range applications.
2. Poses used for defining dynamic gestures. For example, palm poses may be used to define swipe gestures.
3. Poses defined as static gestures.
Groups 2 and 3 above may intersect, but the gesture vocabulary of the GR system 110 is typically configured to avoid such intersection. It should be noted that the cursor is considered a particular type of gesture used to indicate cursor position in the GR system. Accordingly, a cursor may also be referred to herein as a cursor gesture.
A dynamic gesture typically comprises a combination of one or more static poses and some associated movement. Examples of dynamic hand gestures include a swipe left gesture, a swipe right gesture, a swipe up gesture, a swipe down gesture, a poke gesture and a wave gesture, although various subsets of these dynamic gestures as well as additional or alternative dynamic gestures may be supported in other embodiments. Accordingly, embodiments of the invention are not limited to use with any particular gesture vocabulary. In the case of arm or body gestures, the one or more static poses and associated movement of a given dynamic gesture comprise respective static poses and associated movement of the arm or body.
In the FIG. 3 embodiment, the static pose recognition module 114 is configured to identify a particular pose in the input image. As indicated above, the pose may be a cursor pose, a dynamic gesture pose, or a pose defined as a static gesture. The output of the static pose recognition module 114 for a given input image in this embodiment comprises a static pose pattern ID, which identifies a particular pose. The output may additionally include static pose parameters generated by the static pose recognition module 114.
A determination is then made as to whether or not the static pose pattern ID corresponds to a cursor pose or a dynamic gesture pose in order to control application of cursor location and tracking block 302 or dynamic gesture recognition block 304 as appropriate. More particularly, decision block 305 determines if the pose identified in the input image is a cursor pose, and if the pose is a cursor pose, cursor location and tracking block 302 is applied to generate cursor parameters that are provided to application 118-1. The cursor location and tracking block 302 is illustratively configured to determine coordinates of a cursor point within the image and to apply appropriate noise reduction filters, which may involve averaging cursor coordinates within a specified time period.
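A minimal sketch of such a noise reduction filter is shown below, smoothing the cursor point by averaging its coordinates over a short sliding window. The class name and the window length are assumptions for illustration.

```python
# Sliding-window averaging of cursor coordinates, as a minimal sketch of
# the noise reduction performed by cursor location and tracking block
# 302. The window length is an assumed parameter.
from collections import deque


class CursorTracker:
    def __init__(self, window_size=5):
        self.history = deque(maxlen=window_size)  # recent (x, y) cursor points

    def update(self, x, y):
        """Record the cursor point for the current frame and return the
        noise-reduced coordinates as the mean over the window."""
        self.history.append((float(x), float(y)))
        n = len(self.history)
        mean_x = sum(p[0] for p in self.history) / n
        mean_y = sum(p[1] for p in self.history) / n
        return mean_x, mean_y
```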
If the identified pose is not a cursor pose, decision block 306 determines if the identified pose is a dynamic gesture pose, and if the pose is a dynamic gesture pose, dynamic gesture recognition block 304 is applied to generate a dynamic gesture pattern ID that is provided to application 118-1, possibly in conjunction with parameters determined by optional dynamic gesture parameters evaluation block 308. By way of example, the parameters evaluation block 308 may be configured to include extended noise reduction filters in order to calculate a zoom factor parameter of a zoom gesture.
The dynamic gesture recognition block 304 calculates velocities of one or more parts of the image, based on movement of those parts over a specified period of time relative to their respective positions in one or more previous images of an image sequence. The calculated velocities are utilized in block 304 in combination with the static pose pattern ID and any associated parameters provided by the static pose recognition module 114 to recognize a particular gesture.
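The velocity computation can be sketched as a simple finite difference, assuming the positions of a tracked image part (for example, an ROI mass center) are available for consecutive frames; the function name and the finite-difference scheme are assumptions.

```python
# Finite-difference velocity estimate for a tracked image part, e.g. an
# ROI mass center. prev_pos and cur_pos are (x, y) positions in the
# previous and current frames; dt is the elapsed time between them.
def part_velocity(prev_pos, cur_pos, dt):
    vx = (cur_pos[0] - prev_pos[0]) / dt
    vy = (cur_pos[1] - prev_pos[1]) / dt
    return vx, vy


# Example: a part that moved 3 pixels right and 1 pixel down in 1/30 s.
print(part_velocity((100.0, 50.0), (103.0, 51.0), 1.0 / 30.0))
```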
If the identified pose is not a cursor pose or a dynamic gesture pose, the identified pose is assumed to be a pose defined as a static gesture, and the static pose pattern ID is provided to application 118-1, possibly in conjunction with parameters determined by optional static pose parameters evaluation block 310.
In some implementations of the FIG. 3 embodiment, the parameters evaluation blocks 308 and 310 may be incorporated at least in part within the respective dynamic gesture recognition block 304 and static pose recognition module 114. Such arrangements may be utilized, for example, if the associated parameters are part of a feature vector for a Gaussian Mixture Model (GMM) implemented in the recognition block or module.
In the FIG. 3 embodiment, the static pose recognition module 114 performs relatively complex and time-consuming operations as compared to other portions of the GR system 110 such as cursor location and tracking block 302 and dynamic gesture recognition block 304. For example, depending on factors such as the noise level, static pose definitions and required recognition precision, the static pose recognition module 114 may be configured to perform operations such as additional background evaluation and removal, region of interest (ROI) detection, morphological image processing, affine transformations such as shifting, rotating and zooming, and expectation maximization for GMMs. As a result, the static pose recognition module 114 when arranged with other system components as shown in FIG. 3 can create a significant bottleneck for the overall GR system 110. Such a bottleneck can make it difficult to achieve desired levels of recognition precision, particularly when processing an image stream from an image sensor in real time at high frame rates.
FIG. 4 illustrates an embodiment 400 of recognition subsystem 108 that includes cursor and dynamic gesture detectors 113 and finite state machine 115. The cursor detector and dynamic gesture detector are more specifically denoted in this embodiment by respective reference numerals 113A and 113B, and are illustratively shown as being implemented within the finite state machine or FSM 115. This embodiment also includes static pose recognition module 114, cursor location and tracking block 302, dynamic gesture recognition block 304, optional parameters evaluation blocks 308 and 310, and application 118-1.
This embodiment is an example of an arrangement in which the finite state machine 115 is configured to control selective enabling of the cursor detector 113A, the dynamic gesture detector 113B and the static pose recognition module 114. As a more particular example, the finite state machine 115 may be configured such that only one of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 is enabled at a time. Other types of selective enabling of these components using different finite state machines may be used in other embodiments. Accordingly, the term "selective enabling" as used herein is intended to be broadly construed.
The finite state machine 115 in the present embodiment is illustratively configured to have a plurality of states including a cursor detected state in which the cursor location and tracking block 302 is applied responsive to detection of a cursor in a current frame, a dynamic gesture detected state in which dynamic gesture recognition block 304 is applied responsive to detection of a dynamic gesture in the current frame, and a static pose recognition state in which static pose recognition module 114 is applied responsive to failure to detect a cursor or a dynamic gesture in the current frame.
An initial state of the finite state machine 115 for the current frame is given by a final state of the finite state machine for a previous frame. Similarly, the final state of the finite state machine for the current frame is utilized as an initial state of the finite state machine for a subsequent frame. A final state of the finite state machine for a given frame is determined as a function of outputs of respective ones of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 for that frame, as will be described in more detail below in conjunction with FIG. 6.
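A minimal sketch of these three states and the frame-to-frame state carry-over is given below. The enum and class names are assumptions, as is the choice of static pose recognition as the default state before the first frame, which the text above does not specify.

```python
# Sketch of the finite state machine 115: three states, with the final
# state of one frame serving as the initial state of the next. Names
# and the pre-first-frame default are assumptions.
from enum import Enum, auto


class GRState(Enum):
    CURSOR_DETECTED = auto()           # cursor location and tracking applied
    DYNAMIC_GESTURE_DETECTED = auto()  # dynamic gesture recognition applied
    STATIC_POSE_RECOGNITION = auto()   # static pose recognition applied


class GestureFSM:
    def __init__(self):
        # Assumed default before any frame has been processed.
        self.state = GRState.STATIC_POSE_RECOGNITION

    def begin_frame(self):
        # The initial state for the current frame is the final state
        # determined for the previous frame.
        return self.state

    def end_frame(self, new_state):
        # The final state for the current frame; it becomes the initial
        # state for the subsequent frame.
        self.state = new_state
```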
The embodiment of FIG. 4 is advantageously configured to eliminate the above-described potential bottleneck that can arise when the static pose recognition module 114 is arranged as shown in FIG. 3. More particularly, in the FIG. 4 embodiment, the finite state machine 115 controls selective enabling of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 in a manner that allows the cursor detector 113A and the dynamic gesture detector 113B to operate at a higher frame rate than the static pose recognition module 114. As part of this exemplary selective enabling, the finite state machine can adjust a frame rate of operation of the recognition subsystem 108 of GR system 110 responsive to outputs of the cursor detector 113A and the dynamic gesture detector 113B. This facilitates the processing of an image stream in real time at high frame rates, allowing higher levels of recognition precision to be achieved relative to the FIG. 3 embodiment.
For example, the FIG. 4 embodiment allows a cursor and dynamic gestures to be recognized and evaluated using relatively short computation times and therefore relatively high frame rates, on the order of 90 frames per second or more, while static gestures are recognized and evaluated using relatively long computation times and therefore relatively low frame rates, on the order of about 30 frames per second. As mentioned previously, use of such variable frame rates is supported by an image sensor that can operate at variable frame rates, such as the ToF image sensor assumed for the present embodiment.
Accordingly, the finite state machine 115 controls the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 such that higher frame rates are provided for more time-critical tasks such as those performed in cursor location and tracking block 302 and dynamic gesture recognition block 304, while lower frame rates are provided for less time-critical tasks such as those performed by static pose recognition module 114. The frame rate is dynamically varied at runtime depending upon whether the current frame is determined to contain a cursor, a dynamic gesture or a static gesture. The dynamic variation of the frame rate at runtime can be achieved in the recognition subsystem 108 of GR system 110 by acquiring the next frame immediately when the current frame has been processed, rather than acquiring input frames at a fixed rate. Those frames processed through the cursor location and tracking block 302 or dynamic gesture recognition block 304 responsive to respective detection of a cursor or a dynamic gesture by detector 113A or 113B will be processed much more quickly than those frames in which a cursor or a dynamic gesture is not detected. Accordingly, the FIG. 4 embodiment permits faster processing of a current frame and faster acquisition of a subsequent frame upon detection of a cursor or a dynamic gesture in the current frame.
If the image sensor supplying input images to the image processor 102 does not support a variable frame rate, dynamic variation of the frame rate can still be achieved in the GR system 110 by, for example, skipping one or more input frames in order to emulate variable frame rate image sensor functionality.
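One way to emulate this on a fixed-rate sensor is sketched below: after each frame is processed, discard however many fixed-rate frames accumulated during processing, so the next processed frame is always the freshest available. The `sensor.get_frame()` interface is an assumed placeholder, not a defined API.

```python
# Emulating a variable frame rate on a fixed-rate sensor by skipping
# input frames. The sensor interface (get_frame) is an assumed API.
import time


def run_with_frame_skipping(sensor, process_frame, sensor_period_s):
    while True:
        frame = sensor.get_frame()   # fixed-rate acquisition
        start = time.monotonic()
        process_frame(frame)         # fast for cursor/dynamic frames, slow for static pose frames
        elapsed = time.monotonic() - start
        # Drop frames that arrived while the current frame was being
        # processed, so processing always resumes on a fresh frame.
        for _ in range(int(elapsed / sensor_period_s)):
            sensor.get_frame()
```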
It is also possible in a given embodiment that the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 each operate at a different frame rate. Additionally, other embodiments can be configured such that all three of these components operate at the same frame rate.
The recognition subsystem 108 in the FIG. 4 embodiment may be viewed as being separated into distinct portions for detection and processing of cursors, dynamic gestures and static gestures, respectively. Different combinations of hardware, software and firmware can be used for each of these portions. The finite state machine 115 in the present embodiment may be viewed as controlling selective enabling of the portions such that only one of the portions is enabled at a time. Thus, references herein to selective enabling of cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 should be broadly construed so as to encompass in some embodiments selective enabling of respective associated elements such as cursor location and tracking block 302 for cursor detector 113A, dynamic gesture recognition block 304 and dynamic gesture parameters evaluation block 308 for dynamic gesture detector 113B, and static pose parameters evaluation block 310 for static pose recognition module 114.
The cursor detector 113A is configured to detect the presence of a cursor pose within the current frame. As noted above, a cursor pose may comprise a pointing finger pose or fingergun pose for short range applications, and pointing hand or other arm or body poses for long range applications. The cursor detector combines all other non-cursor poses into a single recognition class, illustratively denoted as an "other pose" class, which significantly reduces the number of classes from the eight or more used for respective static poses in a typical gesture vocabulary to two or three classes. Such an arrangement allows the use of efficient and time-saving recognition algorithms without affecting the recognition quality. For example, the cursor detector 113A can be implemented using relatively simple threshold logic by calculating the size of the hand nearest to a controlled device and comparing the calculated size to a specified threshold. If the hand size is below the threshold, it is recognized as a pointing finger or pointing hand, and the pose is recognized as a cursor pose. Numerous other implementations of the cursor detector module are possible.
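The following sketch illustrates threshold logic of this kind, using the pixel count of a segmented nearest-hand mask as the size measure. The mask input, threshold value and pattern ID are all assumptions, not values fixed by this document.

```python
# Threshold-based cursor detection: a sufficiently small nearest-hand
# blob is treated as a pointing finger or pointing hand, i.e. a cursor
# pose. The size measure, threshold and pattern ID are assumptions.
import numpy as np


def detect_cursor(hand_mask: np.ndarray, size_threshold: int = 1500) -> int:
    """hand_mask: boolean matrix marking pixels of the hand nearest the
    controlled device. Returns a nonzero cursor gesture pattern ID if a
    cursor pose is detected, else 0 (the convention for no detection)."""
    CURSOR_PATTERN_ID = 1             # assumed nonzero ID
    hand_size = int(hand_mask.sum())  # pixel count as a simple size measure
    if 0 < hand_size < size_threshold:
        return CURSOR_PATTERN_ID
    return 0
```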
The dynamic gesture detector 113B is configured to detect the presence of a dynamic gesture pose within the current frame. Again, all static poses that are not used to define dynamic gestures can be combined into a single recognition class in order to simplify the dynamic gesture detector. For example, the dynamic gesture detector can be configured to operate using four classes of static poses, namely, a palm class used for swipe gestures, a palm with fingers class, a palm with pinch class used for zoom gestures, and the "other pose" class. One possible implementation of the dynamic gesture detector in the present embodiment also utilizes relatively simple threshold logic by calculating velocities for parts of the image and comparing the calculated velocities to respective specified thresholds. If the calculated velocities exceed the thresholds, significant motion is detected and the detector determines that the gesture in the current frame is not static. This example assumes that the definition of a static gesture includes no significant motion.
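A corresponding sketch of the velocity-threshold logic is below. The velocity inputs, threshold and pattern ID are assumptions, and the mapping from detected motion to a particular dynamic gesture pattern is deliberately left out.

```python
# Velocity-threshold dynamic gesture detection: significant motion of
# any tracked image part means the gesture in the current frame is not
# static. Threshold and pattern ID are assumptions.
import math


def detect_dynamic_gesture(part_velocities, velocity_threshold=0.25):
    """part_velocities: iterable of (vx, vy) velocities for tracked image
    parts. Returns a nonzero dynamic gesture pattern ID if significant
    motion is detected, else 0."""
    DYNAMIC_PATTERN_ID = 2  # assumed nonzero ID
    moving = any(math.hypot(vx, vy) > velocity_threshold
                 for vx, vy in part_velocities)
    return DYNAMIC_PATTERN_ID if moving else 0
```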
In some embodiments, the dynamic gesture detector 113B may also be configured to perform dynamic gesture recognition. Accordingly, in these embodiments, the separate dynamic gesture recognition block can be eliminated.
It should be noted that various parameters computed by the cursor detector 113A or dynamic gesture detector 113B may be provided to the respective cursor location and tracking block 302 and dynamic gesture recognition block 304. For example, parameters such as finger coordinates and velocity computed by the cursor detector may be provided to the cursor location and tracking block 302 for application of averaging or other noise reduction operations. Also, some of the parameters computed by the cursor detector can be provided to the dynamic gesture detector, and vice versa. For example, an ROI mass center velocity computed by one of the detectors 113 may be re-used by the other.
Recognition subsystem components such as static pose recognition module 114, cursor location and tracking block 302, dynamic gesture recognition block 304 and parameters evaluation blocks 308 and 310 may be configured differently in the FIG. 4 embodiment than in the FIG. 3 embodiment, depending upon what parameters are computed by prior blocks or shared between blocks in the FIG. 4 embodiment.
The cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 have associated therewith respective decision blocks 412, 414 and 415 which determine whether or not the corresponding cursor, dynamic gesture or static pose has been detected in the current frame. The decision blocks 412, 414 and 415, although shown in the figure as being separate from the respective cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114, can in other embodiments be incorporated within those respective elements.
The recognition subsystem 108 implements real time gesture recognition using a variable frame rate depending on the current state of the finite state machine 115 and the outputs of the decision blocks 412, 414 and 415. Additional decision blocks in the FIG. 4 embodiment include decision blocks 416, 417 and 418.
The outputs of the static pose recognition module 114, cursor location and tracking block 302, dynamic gesture recognition block 304, and parameters evaluation blocks 308 and 310 are generally consistent with their respective outputs as previously described in conjunction with the embodiment of FIG. 3. Thus, for example, static pose recognition module 114 when enabled generates a static pose pattern ID and optionally one or more associated parameters, cursor location and tracking block 302 when enabled generates cursor parameters, dynamic gesture recognition block 304 when enabled generates a dynamic gesture pattern ID, parameters evaluation block 308 when enabled generates parameters associated with the dynamic gesture pattern ID, and parameters evaluation block 310 when enabled generates additional parameters associated with the static pose pattern ID.
It is assumed that all of the cursor, dynamic gesture and static pose pattern IDs are different from one another, and that a zero pattern ID corresponds to an unrecognized gesture. The latter situation in FIG. 4 corresponds to a negative output from decision block 418 indicating that no gesture is detected in the current frame.
In the FIG. 4 embodiment, an affirmative output from decision block 412 or decision block 414 will lead to application of respective cursor location and tracking block 302 or dynamic gesture recognition block 304. Negative outputs from the decision blocks 412 and 414 are not explicitly shown in FIG. 4, but are processed in the manner indicated in FIG. 5. An affirmative output from decision block 415 will lead to decision block 416, which directs the process to the cursor location and tracking block 302 if the recognized static pose is a cursor pose, and otherwise directs the process to static pose parameters evaluation block 310. It is therefore possible for the static pose recognition module 114 to detect a cursor pose even if the cursor detector 113A did not detect a cursor pose in its initial detection iteration, due to additional image enhancements performed in the course of static pose recognition.
A negative output from decision block 415 will lead to decision block 417, which directs the process to the cursor location and tracking block 302 if the finite state machine 115 is still in a cursor detected state from a previous frame, and otherwise directs the process to decision block 418. An affirmative output from decision block 418 indicates that the finite state machine 115 is still in a dynamic gesture detected state from a previous frame, and the process is directed to the dynamic gesture recognition block 304. A negative output from decision block 418 indicates that no gesture has been detected in the current frame and this information is provided to application 118-1. The decision blocks 417 and 418 are therefore configured such that if no static pose is detected by the static pose recognition module 114, and the finite state machine is in either its cursor detected or dynamic gesture detected state, the decision is made using the finite state machine state. This additional correction significantly decreases the misdetection rate of the GR system.
FIG. 5 shows a more detailed view of the control functionality provided by finite state machine 115 in relation to cursor detector 113A and its associated blocks 412 and 302, dynamic gesture detector 113B and its associated blocks 414 and 304, and static pose recognition module 114. Additional decision blocks 500 and 502 are shown in FIG. 5 and are assumed to be present in the embodiment 400 but are omitted from FIG. 4 for simplicity and clarity of illustration.
If decision block 500 determines that an initial state of the finite state machine 115 for a current frame is a dynamic gesture detected state, based on a determination made for a previous frame, the dynamic gesture detector 113B is initially enabled for the current frame. However, if decision block 500 determines that the initial state of the finite state machine for the current frame is not a dynamic gesture detected state, the cursor detector 113A is initially enabled for the current frame.
Therefore, depending on the initial state of the finite state machine 115 in the current frame, either the cursor detector 113A or the dynamic gesture detector 113B is activated first for the current frame. If a dynamic gesture was detected in the previous frame, the finite state machine will initially be in the dynamic gesture detected state in the current frame, and the dynamic gesture detector is enabled first in the current frame. Otherwise, the cursor detector is enabled first in the current frame. Assuming by way of example that the cursor detector 113A is initially enabled, decision block 412 indicates whether or not the cursor detector detects a cursor in the current frame. If a cursor is detected by the cursor detector for the current frame, cursor location and tracking block 302 is applied using a cursor gesture pattern ID provided by the cursor detector 113A. If a cursor is not detected by the cursor detector for the current frame, the finite state machine 115 enables the dynamic gesture detector 113B for the current frame.
If decision block 414 indicates that a dynamic gesture is detected by the dynamic gesture detector 113B for the current frame, dynamic gesture recognition block 304 is applied. If a dynamic gesture is not detected by the dynamic gesture detector for the current frame, and the finite state machine 115 is still in a dynamic gesture detected state from a previous frame, the finite state machine enables the cursor detector 113A for the current frame. Processing then continues through decision block 412 as previously described. If a dynamic gesture is not detected by the dynamic gesture detector, and if the decision block 502 indicates that the finite state machine is not in a dynamic gesture detected state, the finite state machine enables the static pose recognition module 114 for the current frame.
Accordingly, in the present embodiment, the finite state machine control is configured such that the static pose recognition module 114 is enabled for the current frame only if a cursor is not detected by the cursor detector 113A and a dynamic gesture is not detected by the dynamic gesture detector 113B. Again, other types of finite state machine control can be provided in other embodiments.
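Pulling the FIG. 5 flow together, the per-frame detector scheduling can be sketched as below, reusing the `GRState`/`GestureFSM` definitions from the earlier sketch. The detector callables stand in for cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114; they are placeholders, and the sketch is simplified so that each detector runs at most once per frame.

```python
# Per-frame detector scheduling in the spirit of FIG. 5, simplified so
# each detector runs at most once. try_cursor, try_dynamic and
# try_static_pose are placeholder callables returning a pattern ID
# (nonzero on detection, 0 otherwise); they are assumptions, not
# interfaces defined in this document.
def schedule_detectors(fsm, frame, try_cursor, try_dynamic, try_static_pose):
    initial_state = fsm.begin_frame()
    if initial_state == GRState.DYNAMIC_GESTURE_DETECTED:
        order = [try_dynamic, try_cursor]  # dynamic gesture detector first
    else:
        order = [try_cursor, try_dynamic]  # cursor detector first
    for detector in order:
        pattern_id = detector(frame)
        if pattern_id:                     # affirmative detection decision
            return pattern_id
    # Neither fast detector fired: enable static pose recognition.
    return try_static_pose(frame)
```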
FIG. 6 illustrates the manner in which the state of the finite state machine 115 is updated in conjunction with completion of the recognition processing for the current frame. More particularly, in this exemplary state update module, the outputs of the cursor detector 113A, dynamic gesture detector 113B and static pose recognition module 114 are applied to a maximization element 600, the output of which is used to determine a new state 602 for the finite state machine.
The outputs of the respective cursor detector, dynamic gesture detector and static pose recognition module comprise the respective cursor gesture pattern ID, dynamic gesture pattern ID and static pose pattern ID if any such IDs were detected. If one or more of the cursor detector, dynamic gesture detector and static pose recognition module were not enabled under control of the finite state machine in the current frame, or if enabled in the current frame did not result in an affirmative detection decision, its output is a zero as indicated in the figure. It is assumed that the finite state machine control in the present embodiment ensures that only one of the cursor detector, dynamic gesture detector and static pose recognition module will generate an affirmative detection decision in the current frame.
Accordingly, the maximization element 600 will determine the new state 602 for the finite state machine as one of the cursor detected state, the dynamic gesture detected state or the static pose recognition state, based on which of the corresponding pattern ID outputs was nonzero for the current frame. This new state 602 becomes the final state for the finite state machine in the current frame, and as indicated previously also serves as the initial state of the finite state machine for the next frame.
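A sketch of this state update, again reusing the earlier `GRState`/`GestureFSM` definitions, is given below. Since at most one of the three outputs is nonzero, taking their maximum identifies the component that fired; the handling of the all-zero (no gesture) case is an assumption, as the text above does not specify it.

```python
# State update in the spirit of FIG. 6: the maximization element picks
# the single nonzero pattern ID, and the new state 602 follows from
# which component produced it. The all-zero fallback is an assumption.
def update_state(fsm, cursor_id, dynamic_id, static_id):
    winner = max(cursor_id, dynamic_id, static_id)   # maximization element 600
    if winner == 0:
        new_state = GRState.STATIC_POSE_RECOGNITION  # no gesture detected (assumed fallback)
    elif winner == cursor_id:
        new_state = GRState.CURSOR_DETECTED
    elif winner == dynamic_id:
        new_state = GRState.DYNAMIC_GESTURE_DETECTED
    else:
        new_state = GRState.STATIC_POSE_RECOGNITION
    fsm.end_frame(new_state)  # final state; initial state for the next frame
    return new_state
```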
The particular types and arrangements of processing blocks shown in the embodiments of FIGS. 2 through 6 are exemplary only, and additional or alternative blocks can be used in other embodiments. For example, blocks illustratively shown as being executed serially in the figures can be performed at least in part in parallel with one or more other blocks or in other pipelined configurations in other embodiments.
The illustrative embodiments provide significantly improved gesture recognition performance relative to conventional arrangements. For example, these embodiments can support higher frame rates than would otherwise be possible by substantially reducing the amount of processing time required when cursors or dynamic gestures are detected. Accordingly, the GR system performance is accelerated while ensuring high precision in the recognition process. The disclosed techniques can be applied to a wide range of different GR systems, using depth, grayscale, color, infrared and other types of imagers which support a variable frame rate, as well as imagers which do not support a variable frame rate, and in both short range applications using hand gestures and long range applications using arm or body gestures.
Different portions of the GR system 110 can be implemented in software, hardware, firmware or various combinations thereof. For example, software utilizing hardware accelerators may be used for some processing blocks while other blocks are implemented using combinations of hardware and firmware.
At least portions of the GR-based output 112 of GR system 110 may be further processed in the image processor 102, or supplied to another processing device 106 or image destination, as mentioned previously.
It should again be emphasized that the embodiments of the invention as described herein are intended to be illustrative only. For example, other embodiments of the invention can be implemented utilizing a wide variety of different types and arrangements of image processing circuitry, modules, processing blocks and associated operations than those utilized in the particular embodiments described herein. In addition, the particular assumptions made herein in the context of describing certain embodiments need not apply in other embodiments. These and numerous other alternative embodiments within the scope of the following claims will be readily apparent to those skilled in the art.

Claims

What is claimed is:
1. A method comprising:
configuring a gesture recognition system to include a cursor detector, a dynamic gesture detector and a static pose recognition module; and
providing a finite state machine to control selective enabling of the cursor detector, the dynamic gesture detector and the static pose recognition module;
wherein the configuring and providing are implemented in an image processor comprising a processor coupled to a memory.
2. The method of claim 1 wherein the finite state machine has a plurality of states including:
a cursor detected state in which cursor location and tracking are applied responsive to detection of a cursor in a current frame;
a dynamic gesture detected state in which dynamic gesture recognition is applied responsive to detection of a dynamic gesture in the current frame; and
a static pose recognition state in which static pose recognition is applied responsive to failure to detect a cursor or a dynamic gesture in the current frame.
3. The method of claim 1 wherein a final state of the finite state machine for a current frame is determined as a function of outputs of respective ones of the cursor detector, dynamic gesture detector and static pose recognition module for the current frame.
4. The method of claim 3 wherein the final state of the finite state machine for the current frame is utilized as an initial state of the finite state machine for a subsequent frame.
5. The method of claim 3 wherein an initial state of the finite state machine for the current frame is given by a final state of the finite state machine for a previous frame.
6. The method of claim 1 wherein the finite state machine is configured such that only one of the cursor detector, dynamic gesture detector and static pose recognition module is enabled at a time.
7. The method of claim 1 wherein the cursor detector and the dynamic gesture detector operate at a higher frame rate than the static pose recognition module.
8. The method of claim 1 wherein the finite state machine is configured to adjust a frame rate of operation of the gesture recognition system responsive to outputs of the cursor detector and the dynamic gesture detector.
9. The method of claim 1 wherein if an initial state of the finite state machine for a current frame is a dynamic gesture detected state, the dynamic gesture detector is initially enabled for the current frame.
10. The method of claim 9 wherein if a dynamic gesture is detected by the dynamic gesture detector for the current frame, dynamic gesture recognition is applied, and if a dynamic gesture is not detected by the dynamic gesture detector for the current frame, the finite state machine enables the cursor detector for the current frame.
11. The method of claim 10 wherein if a dynamic gesture is not detected by the dynamic gesture detector and a cursor is not detected by the cursor detector, the finite state machine enables the static pose recognition module for the current frame.
12. The method of claim 1 wherein if an initial state of the finite state machine for a current frame is not a dynamic gesture detected state, the cursor detector is initially enabled for the current frame.
13. The method of claim 12 wherein if a cursor is detected by the cursor detector for the current frame, cursor location and tracking is applied, and if a cursor is not detected by the cursor detector for the current frame, the finite state machine enables the dynamic gesture detector for the current frame.
14. The method of claim 13 wherein if a dynamic gesture is not detected by the dynamic gesture detector and a cursor is not detected by the cursor detector, the finite state machine enables the static pose recognition module for the current frame.
15. A non-transitory computer-readable storage medium having computer program code embodied therein, wherein the computer program code when executed in the image processor causes the image processor to perform the method of claim 1.
16. An apparatus comprising:
an image processor comprising image processing circuitry and an associated memory;
wherein the image processor is configured to implement a gesture recognition system utilizing the image processing circuitry and the memory, the gesture recognition system comprising:
a cursor detector;
a dynamic gesture detector;
a static pose recognition module; and
a finite state machine configured to control selective enabling of the cursor detector, the dynamic gesture detector and the static pose recognition module.
17. The apparatus of claim 16 wherein the finite state machine has a plurality of states including:
a cursor detected state in which cursor location and tracking are applied responsive to detection of a cursor in a current frame;
a dynamic gesture detected state in which dynamic gesture recognition is applied responsive to detection of a dynamic gesture in the current frame; and
a static pose recognition state in which static pose recognition is applied responsive to failure to detect a cursor or a dynamic gesture in the current frame.
18. The apparatus of claim 16 wherein the finite state machine is configured such that only one of the cursor detector, dynamic gesture detector and static pose recognition module is enabled at a time.
19. An integrated circuit comprising the apparatus of claim 16.
20. An image processing system comprising the apparatus of claim 16.
PCT/US2014/035838 2013-10-25 2014-04-29 Finite state machine cursor and dynamic gesture detector recognition WO2015060896A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/358,358 US20150220153A1 (en) 2013-10-25 2014-04-29 Gesture recognition system with finite state machine control of cursor detector and dynamic gesture detector

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2013147803/08A RU2013147803A (en) 2013-10-25 2013-10-25 GESTURE RECOGNITION SYSTEM WITH FINITE AUTOMATIC CONTROL OF INDICATOR DETECTION UNIT AND DYNAMIC GESTURE DETECTION UNIT
RU2013147803 2013-10-25

Publications (1)

Publication Number Publication Date
WO2015060896A1 true WO2015060896A1 (en) 2015-04-30

Family

ID=52993331

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/035838 WO2015060896A1 (en) 2013-10-25 2014-04-29 Finite state machine cursor and dynamic gesture detector recognition

Country Status (3)

Country Link
US (1) US20150220153A1 (en)
RU (1) RU2013147803A (en)
WO (1) WO2015060896A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160266647A1 (en) * 2015-03-09 2016-09-15 Stmicroelectronics Sa System for switching between modes of input in response to detected motions
US10222869B2 (en) * 2015-08-03 2019-03-05 Intel Corporation State machine based tracking system for screen pointing control
GB201706300D0 (en) 2017-04-20 2017-06-07 Microsoft Technology Licensing Llc Debugging tool
CN109634415B * 2018-12-11 2019-10-18 哈尔滨拓博科技有限公司 Gesture recognition control method for controlling an analog quantity
CN112115801B (en) * 2020-08-25 2023-11-24 深圳市优必选科技股份有限公司 Dynamic gesture recognition method and device, storage medium and terminal equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110080490A1 (en) * 2009-10-07 2011-04-07 Gesturetek, Inc. Proximity object tracker
US20120069055A1 (en) * 2010-09-22 2012-03-22 Nikon Corporation Image display apparatus
US20130004016A1 (en) * 2011-06-29 2013-01-03 Karakotsios Kenneth M User identification by gesture recognition
US20130050425A1 (en) * 2011-08-24 2013-02-28 Soungmin Im Gesture-based user interface method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KUMAR ET AL.: "Gesture based 3D Man-Machine Interaction using a Single Camera", IEEE CONFERENCE ON MULTIMEDIA COMPUTING AND SYSTEMS, vol. 1, July 1999 (1999-07-01), pages 630-635, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=779273&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D779273> [retrieved on 2014-08-20] *
QUEK, F.: "Unencumbered Gestural Interaction", IEEE MULTIMEDIA, vol. 3, 6 August 2002 (2002-08-06), pages 36-47, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=556459&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D556459> [retrieved on 2014-08-13] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926454A (en) * 2021-02-26 2021-06-08 重庆长安汽车股份有限公司 Dynamic gesture recognition method
CN112926454B (en) * 2021-02-26 2023-01-06 重庆长安汽车股份有限公司 Dynamic gesture recognition method

Also Published As

Publication number Publication date
RU2013147803A (en) 2015-04-27
US20150220153A1 (en) 2015-08-06

Similar Documents

Publication Publication Date Title
US20150220153A1 (en) Gesture recognition system with finite state machine control of cursor detector and dynamic gesture detector
USRE48780E1 (en) Method and apparatus for extracting static pattern based on output of event-based sensor
US9852495B2 (en) Morphological and geometric edge filters for edge enhancement in depth images
KR102433931B1 (en) Method and device for recognizing motion
US9958938B2 (en) Gaze tracking for a mobile device
US20150269425A1 (en) Dynamic hand gesture recognition with selective enabling based on detected hand velocity
KR20140017829A (en) Device of recognizing predetermined gesture based on a direction of input gesture and method thereof
US20150310264A1 (en) Dynamic Gesture Recognition Using Features Extracted from Multiple Intervals
US11375244B2 (en) Dynamic video encoding and view adaptation in wireless computing environments
KR20170056860A (en) Method of generating image and apparatus thereof
KR20150067680A (en) System and method for gesture recognition of vehicle
US10009598B2 (en) Dynamic mode switching of 2D/3D multi-modal camera for efficient gesture detection
US20160026857A1 (en) Image processor comprising gesture recognition system with static hand pose recognition based on dynamic warping
US11416078B2 (en) Method, system and computer program for remotely controlling a display device via head gestures
CN111722245A (en) Positioning method, positioning device and electronic equipment
US9857878B2 (en) Method and apparatus for processing gesture input based on elliptical arc and rotation direction that corresponds to gesture input
CN111164544A (en) Motion sensing
WO2015065520A1 (en) Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition
WO2015012896A1 (en) Gesture recognition method and apparatus based on analysis of multiple candidate boundaries
US20170371417A1 (en) Technologies for adaptive downsampling for gesture recognition
US20150146920A1 (en) Gesture recognition method and apparatus utilizing asynchronous multithreaded processing
WO2015119657A1 (en) Depth image generation utilizing depth information reconstructed from an amplitude image
JP2023010769A (en) Information processing device, control method, and program
US20150139487A1 (en) Image processor with static pose recognition module utilizing segmented region of interest
US9323995B2 (en) Image processor with evaluation layer implementing software and hardware algorithms of different precision

Legal Events

WWE WIPO information: entry into national phase (Ref document number: 14358358; Country of ref document: US)

121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 14855360; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 EP: PCT application non-entry in European phase (Ref document number: 14855360; Country of ref document: EP; Kind code of ref document: A1)