CN105308536A - Dynamic user interactions for display control and customized gesture interpretation - Google Patents

Dynamic user interactions for display control and customized gesture interpretation

Info

Publication number
CN105308536A
CN105308536A (application CN201480014375.1A)
Authority
CN
China
Prior art keywords
gesture
user
space
response
finger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201480014375.1A
Other languages
Chinese (zh)
Inventor
D. Holz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LMI Clearing Co.,Ltd.
Ultrahaptics IP Ltd
Original Assignee
Leap Motion Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leap Motion Inc
Priority to CN202110836174.1A (published as CN113568506A)
Publication of CN105308536A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304: Detection arrangements using opto-electronic means

Abstract

The technology disclosed relates to distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space. It also relates to uniformly responding to gestural inputs from a user irrespective of a position of the user. It further relates to detecting if a user has intended to interact with a virtual object based on measuring a degree of completion of gestures and creating interface elements in the 3D space.

Description

Dynamic user interactions for display control and customized gesture interpretation
Technical field
The technology disclosed relates generally to display control and gesture recognition and, more particularly, to display control based on dynamic user interactions and to the use of free-space gestures as user input to a machine.
Background
Traditionally, users have interacted with electronic devices (such as computers or televisions) or computing applications (such as computer games, multimedia applications, or office applications) via indirect input devices, including, for example, keyboards, joysticks, or remote controls. The user manipulates the input device to perform a particular action, such as selecting a particular entry from a menu of actions. Modern input devices, however, often include multiple buttons in complex configurations to facilitate communication of user commands to the electronic device or computing application; correct operation of these input devices is often challenging to the user. Additionally, actions performed on an input device generally do not correspond in any intuitive sense to the resulting change, such as a change on a screen display controlled by the device. Input devices can also be lost, and the frequent experience of searching for misplaced devices has become a frustrating commonplace of modern life.
Touch screens implemented directly on user-controlled devices eliminate the need for separate input devices. A touch screen detects the presence and location of a "touch" performed by a user's finger or other object on the display screen, enabling the user to enter a desired input by simply touching the appropriate area of the screen. While suitable for small display devices such as tablets and wireless phones, touch screens are impractical for large entertainment devices that the user views from a distance. Particularly for games implemented on such devices, electronics manufacturers have developed systems that detect a user's movements or gestures and cause the display to respond in a limited way. For example, a user near a television can perform a sliding hand gesture, which is detected by the gesture recognition system; in response to the detected gesture, the television can activate and display a control panel on the screen, allowing the user to make selections thereon using subsequent gestures, e.g., moving her hand in an "up" or "down" direction, which is again detected and interpreted to facilitate channel selection.
Although these systems have generated consumer excitement and may ultimately supplant conventional control modes that require physical contact between the user and a control element, current devices suffer from low detection sensitivity. Users are required to make large, often exaggerated, and sometimes awkward movements to elicit a response from the gesture recognition system. Due to the low resolution, small-scale gestures go undetected or are treated as noise. For example, to move a cursor a distance of one centimeter on a television screen, the user's hand may have to traverse a much larger distance. This mismatch not only imposes a cumbersome operating burden on the user, particularly when her movement is constrained, but also degrades the intuitive relationship between gesture and response. Moreover, the response of such systems is typically uniform, i.e., a given physical extent of a gesture always corresponds to the same on-screen control increment, regardless of the user's intent.
An opportunity therefore arises to introduce a new gesture recognition system that detects small-scale gestures in real time and allows users to adjust the relationship between their physical movements and the corresponding actions displayed on the screen.
To select a desired virtual object displayed on the screen of an electronic device, the user may need to sweep her hand across a large distance. Due to low sensitivity, a sweep that is too short may go undetected or be treated as noise, leaving the desired virtual object unselected. As a result, the user may find herself repeating the same gesture with varying degrees of motion until the desired selection is registered. The repetitiveness of gestures is not only annoying but also makes it difficult for the user to determine exactly when the virtual object has been successfully selected. Accordingly, there is a need for a gesture recognition system that indicates when the user's gesture is complete.
In addition, a user action intended as a single gesture may nonetheless involve multiple interrelated movements, each of which could be interpreted as a separate gesture. As a consequence, a conventional gesture recognition system may not correctly interpret the user's intent and may therefore transmit erroneous signals (or no signal at all) to the controlled electronic device. Suppose, for example, that the user waves her arm while unintentionally flexing her fingers; because of the interrelated movements, the gesture recognition system may fail to identify the intended gesture, or may indicate that two gestures were made (which may conflict, leaving the device at a loss, or only one of which may correspond to an allowable input).
Moreover, existing systems rely on input elements (e.g., a computer mouse and keyboard) to supplement any gesture recognition they may perform. These systems lack the user interface elements required for anything beyond simple commands, and often such commands can be recognized only after the user has set up the gesture recognition environment using a keyboard and mouse. A further opportunity therefore arises to introduce a new gesture recognition system that allows users to interact with a wider variety of applications and games in a more sophisticated manner.
Summary of the invention
Embodiments of the technology disclosed relate to methods and systems having high detection sensitivity for user gestures, allowing the user to control an electronic device accurately and rapidly (i.e., without unnecessary delay) using small-scale gestures and, in some embodiments, to control the relationship between the physical extent of a gesture and the resulting displayed response. In various embodiments, the shape and position of one or more body parts with which the user gestures (hereafter collectively referred to as "gesturing body parts," e.g., fingers, hands, arms, etc.) are first detected and identified in captured two-dimensional (2D) images; a temporal collection of the gesturing body parts across a time series of images is then assembled to reconstruct the gesture in three-dimensional (3D) space. The user's intent can be identified by, for example, comparing the detected gesture against a set of gesture records stored in a database. Each gesture record relates a detected gesture (encoded, for example, as a vector) to an action, command, or other input, which is processed by the currently running application, e.g., to invoke a corresponding instruction or instruction sequence that the application executes, or to supply a parameter value or other input data. Because the gesture recognition system of the technology disclosed provides high detection sensitivity, small movements (e.g., of a few millimeters) of a detected and identified body part (e.g., a finger) can be accurately detected and recognized, thereby allowing the user to interact precisely with the electronic device and/or the applications displayed thereon.
Some embodiments of the technology disclosed discriminate, in real time, a dominant gesture from unrelated movements that might each qualify as a gesture, and can output a signal indicating the dominant gesture. Methods and systems according to the technology disclosed ideally have high detection sensitivity to movements that can qualify as user gestures; combined with rapid discrimination of the dominant gesture, this capability allows the user to control an electronic device accurately and promptly (i.e., without unnecessary delay).
In various embodiments, when multiple gestures are detected (e.g., an arm wave together with finger flexing), the gesture recognition system identifies the user's dominant gesture. For example, the system can identify the waving gesture by computing its trajectory, and identify the finger-flexing gesture as five separate (and smaller-scale) trajectories. Each trajectory can be converted into a vector along, for example, six Euler degrees of freedom in Euler space. The vector with the largest magnitude represents the dominant component of the motion (in this case, the wave), and the remaining vectors can be ignored or processed differently from the dominant gesture. In some embodiments, a vector filter implemented using conventional filtering techniques is applied to the multiple vectors to filter out the small vectors and identify the dominant vector. This process can be repeated iteratively until one vector, the dominant component of the motion, is identified; the identified dominant component can then be used to operate the electronic device or an application thereon, as sketched below.
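A minimal sketch of this iterative magnitude filtering, assuming each candidate trajectory has already been reduced to a six-component displacement vector; the 0.5 cutoff and all names here are illustrative choices, not taken from the patent:

```python
import numpy as np

def dominant_gesture(candidates):
    """Reduce several simultaneous gesture trajectories to the dominant one.

    `candidates` maps a gesture name to its 6-DOF vector
    (x, y, z, roll, pitch, yaw); small vectors are filtered out
    iteratively until a single dominant component remains.
    """
    vectors = {name: np.asarray(v, dtype=float) for name, v in candidates.items()}
    while len(vectors) > 1:
        mags = {name: np.linalg.norm(v) for name, v in vectors.items()}
        cutoff = 0.5 * max(mags.values())      # illustrative filter threshold
        survivors = {n: v for n, v in vectors.items() if mags[n] >= cutoff}
        if len(survivors) == len(vectors):     # nothing filtered out this pass:
            return max(mags, key=mags.get)     # fall back to largest magnitude
        vectors = survivors
    return next(iter(vectors))

# The arm wave dwarfs the finger flexions, so it survives the filtering.
trajectories = {
    "wave":     [40.0, 5.0, 2.0, 0.1, 0.0, 0.3],
    "finger_1": [1.2, 0.4, 0.1, 0.0, 0.0, 0.0],
    "finger_2": [0.9, 0.3, 0.2, 0.0, 0.0, 0.0],
}
assert dominant_gesture(trajectories) == "wave"
```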
In some embodiments, the gesture recognition system enables or provides an on-screen display that indicates the degree of completion of a gesture in real time. For example, the gesture recognition system can identify a gesture by matching it against records in a database comprising multiple images, each associated with a degree of completion (e.g., from 1% to 100%) of the performed gesture. The degree of completion of the performed gesture is then rendered on the screen. For example, as the user moves her finger toward the electronic device to perform a clicking or touching gesture, the device display can show a hollow circular icon that an application progressively fills with color to indicate how far the user's motion has carried the gesture toward completion. When the user fully completes the clicking or touching gesture, the circle is entirely filled, which may cause, for example, a desired virtual object to be marked as selected. The completion indicator thus enables the user to recognize the exact moment at which the virtual object is selected.
In other embodiments, a virtual on-screen disk can be used to select the value of a variable or other parameter, allowing the user to slide the disk by pressing its side. The user can create other user interface elements with further gestures; once such an element has been created, it can be used as an input or control for a software application.
In one embodiment, the gesture recognition system provides functionality for the user to statically or dynamically adjust the relationship between her actual motion and the resulting response, e.g., the motion of an object displayed on the screen of the electronic device. In static operation, the user manually sets this sensitivity level by manipulating a displayed slide switch or other icon, for example using the gesture recognition system described herein. In dynamic operation, the system automatically responds to the distance between the user and the device, the nature of the displayed activity, the available physical space, and/or the pattern of the user's own movements (e.g., scaling the response to the volume of space within which the user appears to be confined). For example, when available space is limited, the user can adjust the relationship to a ratio smaller than one (e.g., 1:10) so that each unit of actual movement (e.g., 1 millimeter) causes ten units (e.g., 10 pixels or 10 millimeters) of motion of the displayed object. Similarly, when the user is relatively close to the electronic device, the relationship can be adjusted (or the device, sensing the user's distance, can adjust it automatically) to a ratio greater than one (e.g., 10:1) to compensate. Adjusting the ratio of the user's actual motion to the resulting on-screen action (e.g., object motion) thus provides extra flexibility for remotely commanding the electronic device and/or controlling a virtual environment displayed thereon.
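The two modes of adjustment might be combined roughly as follows; this is a hedged sketch under assumed units and thresholds, not an algorithm given in the patent:

```python
def motion_scale(user_distance_m, free_space_m, manual_scale=None):
    """Choose a physical-to-on-screen motion ratio (screen units per motion unit).

    Static operation: the user sets the ratio directly (e.g., via an on-screen
    slider).  Dynamic operation: amplify when the user is far away or cramped
    for space, attenuate when she is close with room to move.  The reference
    distance, space threshold, and clamp limits are all illustrative.
    """
    if manual_scale is not None:       # static operation: user-chosen ratio
        return manual_scale
    scale = user_distance_m / 1.5      # farther user -> amplify the response
    if free_space_m < 0.5:             # cramped space -> amplify further
        scale *= 0.5 / free_space_m
    return max(0.1, min(scale, 10.0))  # clamp to a sane range

# A cramped user 3 m from the screen gets her motions amplified several-fold.
print(motion_scale(user_distance_m=3.0, free_space_m=0.25))  # 4.0
```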
According to one embodiment, the technology disclosed also relates to gesture filtering. In particular, it relates to distinguishing gestures of interest from gestures not of interest in a three-dimensional (3D) sensory space by comparing characteristics of user-defined reference gestures against characteristics of the actual gestures performed in the 3D sensory space. Based on this comparison, a set of gestures of interest is filtered from all the gestures performed in the 3D sensory space.
According to another embodiment, the technology disclosed relates to customized gesture interpretation. In particular, it relates to setting parameters for recognizing gestures by prompting the user to select values for characteristics of the gestures. In one embodiment, the technology disclosed includes performing a focused demonstration of the boundaries of a gesture characteristic. It also includes testing the gesture interpretation by prompting the user to perform a complete gesture demonstration and receiving the user's evaluation of the resulting interpretation.
Other aspects and advantages of the technology can be appreciated on review of the following drawings, detailed description, and claims.
Brief description of the drawings
In the drawings, like reference characters generally refer to similar parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the technology disclosed. In the following description, various embodiments of the technology disclosed are described with reference to the following drawings, in which:
Figure 1A shows a system for capturing image data according to an embodiment of the technology disclosed.
Figure 1B is a simplified block diagram of a gesture recognition system implementing an image analysis apparatus according to an embodiment of the technology disclosed.
Fig. 2A depicts an electronic device controlled by a user's gestures according to an embodiment of the technology disclosed.
Fig. 2B depicts multiple gestures detected by a gesture recognition system according to an embodiment of the technology disclosed.
Figs. 3A and 3B depict on-screen indicators reflecting the degree of completion of a user's gesture according to an embodiment of the technology disclosed.
Fig. 3C is a flowchart showing a method of predicting when a user will select a virtual object and subsequently manipulating the selected virtual object according to an embodiment of the technology disclosed.
Figs. 4A and 4B depict dynamic adjustment of the relationship between a user's actual motion and the resulting action displayed on the screen according to an embodiment of the technology disclosed.
Fig. 4C is a flowchart showing a method of dynamically adjusting the relationship between a user's actual motion and the resulting action displayed on the screen according to an embodiment of the technology disclosed.
Figs. 5A and 5B depict a disk user interface element according to an embodiment of the technology disclosed.
Fig. 6 is a flowchart showing a method of filtering gestures according to an embodiment of the technology disclosed.
Fig. 7 is a flowchart showing a method of customizing gesture interpretation according to an embodiment of the technology disclosed.
Figs. 8A, 8B and 8C illustrate an exemplary training guidance flow for user-defined gestures according to an embodiment of the technology disclosed.
Detailed description
Embodiments of the technology disclosed relate to methods and systems for operating a motion capture system with reduced power consumption using audio signals. For example, a sequence of images can be correlated to construct a 3D model of an object, including its position and shape. A succession of images can be analyzed using the same technique to model the motion of the object, such as free-form gestures. In low-light situations, where free-form gestures cannot be recognized optically with a sufficient degree of reliability, audio signals can supply the direction and location of the object, as further described herein.
As used herein, a given signal, event, or value is "dependent on" a predecessor signal, event, or value if the predecessor signal, event, or value influenced the given signal, event, or value. If there is an intervening processing element, step, or time period, the given signal, event, or value can still be "dependent on" the predecessor signal, event, or value. If the intervening processing element or step combines more than one signal, event, or value, the signal output of the processing element or step is considered "dependent on" each of the signal, event, or value inputs. If the given signal, event, or value is the same as the predecessor signal, event, or value, this is merely a degenerate case in which the given signal, event, or value is still considered to be "dependent on" the predecessor signal, event, or value. "Responsiveness" of a given signal, event, or value upon another signal, event, or value is defined similarly.
Referring first to Figure 1A, an exemplary gesture recognition system 100A includes a pair of cameras 102, 104 coupled to an image analysis system 106. Cameras 102, 104 can be any type of camera, including cameras sensitive across the entire visible spectrum or, more typically, with enhanced sensitivity to a confined wavelength band (e.g., the infrared (IR) or ultraviolet bands); more generally, the term "camera" herein refers to any device (or combination of devices) capable of capturing an image of an object and representing that image in the form of digital data. While illustrative embodiments are described using an example of two cameras, other embodiments employing a different number of cameras, non-camera light-sensitive image sensors, or combinations thereof are readily implemented. For example, line sensors or line cameras can be used rather than conventional devices that capture a two-dimensional (2D) image. The term "light" is used generally to connote any electromagnetic radiation, which may or may not be within the visible spectrum, and which may be broadband (e.g., white light) or narrowband (e.g., a single wavelength or narrow band of wavelengths).
Cameras 102, 104 are preferably capable of capturing video images (i.e., successive image frames at a constant rate of at least 15 frames per second), although no particular frame rate is required. The capabilities of cameras 102, 104 are not critical to the technology disclosed, and the cameras can vary as to frame rate, image resolution (e.g., pixels per image), color or intensity resolution (e.g., number of bits of intensity data per pixel), focal length of lenses, depth of field, etc. In general, for a particular application, any cameras capable of focusing on objects within a spatial volume of interest can be used. For instance, to capture the motion of the hand of an otherwise stationary person, the volume of interest might be defined as a cube approximately one meter on a side.
In some embodiments, the illustrated system 100A includes a pair of sources 108, 110, which can be disposed to either side of cameras 102, 104, and which are controlled by image analysis system 106. In one embodiment, the sources 108, 110 are light sources. For example, the light sources can be infrared light sources such as infrared light-emitting diodes (LEDs), and cameras 102, 104 can be sensitive to infrared light. Use of infrared light can allow the gesture recognition system 100A to operate under a broad range of lighting conditions and can avoid various inconveniences or distractions that may be associated with directing visible light into the region where the person is moving. However, a particular wavelength or region of the electromagnetic spectrum can be required. In one embodiment, filters 120, 122 are placed in front of cameras 102, 104 to filter out visible light so that only infrared light is registered in the images captured by cameras 102, 104. In another embodiment, the sources 108, 110 are sonic sources. The sonic sources transmit sound waves toward the user; the user either blocks the sound waves that impinge upon her ("sonic shadowing") or alters them ("sonic deflection"). Such sonic shadowing and/or deflection can also be used to detect the user's gestures. In some embodiments, the sound waves are, for example, ultrasound, which is inaudible to humans.
It should be stressed that the arrangement shown in Figure 1A is representative and not limiting. For example, lasers or other light sources can be used instead of LEDs. In embodiments that include a laser, additional optics (e.g., a lens or diffuser) may be employed to widen the laser beam (making its field of view similar to that of the cameras). Useful arrangements can also include short-angle and wide-angle illuminators for different ranges. Light sources are typically diffuse rather than specular point sources; for example, packaged LEDs with light-spreading encapsulation are suitable.
In operation, light sources 108, 110 are arranged to illuminate a region of interest 112 that includes a part of a human body 114 (in this example, a hand) optionally holding a tool or other object of interest, and cameras 102, 104 are oriented toward the region 112 to capture video images of the hand 114. In some embodiments, the operation of light sources 108, 110 and cameras 102, 104 is controlled by the image analysis system 106, which can be, e.g., a computer system. Based on the captured images, the image analysis system 106 determines the position and/or motion of the object 114.
Figure 1B is a simplified block diagram of a computer system 100B implementing image analysis system 106 (also referred to as an image analyzer) according to an embodiment of the technology disclosed. Image analysis system 106 can include or consist of any device or device component capable of capturing and processing image data. In some embodiments, computer system 100B includes a processor 132, a memory 134, a camera interface 136, a display 138, speakers 139, a keyboard 140, and a mouse 141. Memory 134 can be used to store instructions to be executed by processor 132 as well as input and/or output data associated with execution of the instructions. In particular, memory 134 contains instructions, conceptually illustrated as a group of modules described in greater detail below, that control the operation of processor 132 and its interaction with the other hardware components. An operating system directs the execution of low-level, basic system functions such as file management and the operation of mass storage devices. The operating system may be or include a variety of operating systems, such as the Microsoft WINDOWS operating system, the Unix operating system, the Linux operating system, the Xenix operating system, the IBM AIX operating system, the Hewlett Packard UX operating system, the Novell NETWARE operating system, the Sun Microsystems SOLARIS operating system, the OS/2 operating system, the BeOS operating system, the MACINTOSH operating system, the APACHE operating system, the OPENACTION operating system, iOS, Android or other mobile operating systems, or another operating system or platform.
The computing environment can also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, a hard disk drive can read from or write to non-removable, nonvolatile magnetic media. A magnetic disk drive can read from or write to a removable, nonvolatile magnetic disk, and an optical disk drive can read from or write to a removable, nonvolatile optical disk such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid-state RAM, solid-state ROM, and the like. The storage media are typically connected to the system bus through a removable or non-removable memory interface.
Processor 132 can be a general-purpose microprocessor but, depending on the embodiment, can alternatively be a microcontroller, a peripheral integrated circuit element, a CSIC (customer-specific integrated circuit), an ASIC (application-specific integrated circuit), a logic circuit, a digital signal processor, a programmable logic device such as an FPGA (field-programmable gate array), a PLD (programmable logic device), a PLA (programmable logic array), an RFID processor, a smart chip, or any other device or arrangement of devices capable of implementing the actions of the processes of the technology disclosed.
Camera interface 136 can include hardware and/or software that enables communication between computer system 100B and cameras such as cameras 102, 104 shown in Figure 1A, as well as associated light sources such as light sources 108, 110 of Figure 1A. Thus, for example, camera interface 136 can include one or more data ports 146, 148 to which cameras can be connected, as well as hardware and/or software signal processors that modify data signals received from the cameras (e.g., to reduce noise or reformat data) prior to providing the signals as inputs to a motion-capture ("mocap") program 144 executing on processor 132. In some embodiments, camera interface 136 can also transmit signals to the cameras, e.g., to activate or deactivate them, to control camera settings (frame rate, image quality, sensitivity, etc.), and so on. Such signals can be transmitted, e.g., in response to control signals from processor 132, which may in turn be generated in response to user input or other detected events.
Camera interface 136 can also include controllers 147, 149, to which light sources (e.g., light sources 108, 110) can be connected. In some embodiments, controllers 147, 149 supply operating current to the light sources, e.g., in response to instructions from processor 132 executing mocap program 144. In other embodiments, the light sources can draw operating current from an external power supply (not shown), and controllers 147, 149 can generate control signals for the light sources, e.g., instructing them to turn on or off or to change brightness. In some embodiments, a single controller can be used to control multiple light sources.
Instructions defining mocap program 144 are stored in memory 134; when executed, these instructions perform motion-capture analysis on images supplied from cameras connected to camera interface 136. In one embodiment, mocap program 144 includes various modules, such as an object detection module 152, an object analysis module 154, and a gesture recognition module 156. Object detection module 152 can analyze images (e.g., images captured via camera interface 136) to detect the edges of an object therein and/or other information about the object's location. Object analysis module 154 can analyze the object information provided by object detection module 152 to determine the 3D position and/or motion of the object (e.g., the user's hand). Examples of operations that can be implemented in code modules of mocap program 144 are described below. Memory 134 can also include other information and/or code modules used by mocap program 144.
Display 138, speakers 139, keyboard 140, and mouse 141 can be used to facilitate user interaction with computer system 100B. In some embodiments, results of gesture capture using camera interface 136 and mocap program 144 can be interpreted as user input. For example, a user can perform hand gestures that are analyzed using mocap program 144, and the results of this analysis can be interpreted as an instruction to some other program executing on processor 132 (e.g., a web browser, word processor, or other application). Thus, by way of illustration, the user might use upward or downward swiping gestures to "scroll" a webpage currently displayed on display 138, use rotating gestures to increase or decrease the volume of the audio output from speakers 139, and so on.
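One plausible way such interpreted gestures could be routed to application commands, sketched with an invented DemoApp interface mirroring the swipe-to-scroll and rotate-for-volume examples above; the gesture names and handlers are illustrative only:

```python
class DemoApp:
    """Stand-in application; a real target might be a browser or media player."""
    def scroll(self, pixels):       print(f"scroll {pixels:+d} px")
    def change_volume(self, delta): print(f"volume {delta:+.1f}")

# Hypothetical mapping from recognized gestures to application commands.
GESTURE_ACTIONS = {
    "swipe_up":   lambda app, scale: app.scroll(-int(100 * scale)),
    "swipe_down": lambda app, scale: app.scroll(+int(100 * scale)),
    "rotate_cw":  lambda app, scale: app.change_volume(+scale),
    "rotate_ccw": lambda app, scale: app.change_volume(-scale),
}

def dispatch(app, gesture_name, scale=1.0):
    """Route a recognized gesture and its scale value to an application command."""
    action = GESTURE_ACTIONS.get(gesture_name)
    if action is not None:
        action(app, scale)

dispatch(DemoApp(), "swipe_up", scale=1.5)   # prints: scroll -150 px
```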
It will be appreciated that computer system 100B is illustrative and that variations and modifications are possible. Computer systems can be implemented in a variety of form factors, including server systems, desktop systems, laptop systems, tablets, smart phones, personal digital assistants, and so on. A particular implementation can include other functionality not described herein, e.g., wired and/or wireless network interfaces, media playing and/or recording capability, etc. In some embodiments, one or more cameras can be built into the computer rather than supplied as separate components. Further, an image analyzer can be implemented using only a subset of computer system components (e.g., as a processor executing program code, an ASIC, or a fixed-function digital signal processor, with suitable I/O interfaces to receive image data and output analysis results).
While computer system 100B is described herein with reference to particular modules, it is to be understood that the modules are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. Further, the modules need not correspond to physically distinct components. To the extent that physically distinct components are used, connections between components (e.g., for data communication) can be wired and/or wireless as desired.
Referring to Figures 1A, 1B, and 2A, the user performs a gesture that is captured by cameras 102, 104 as a series of temporally sequential images. These images are analyzed by gesture recognition module 156, which may be implemented as another module of mocap program 144. Gesture recognition systems are well known in the field of computer vision and can utilize algorithms based on 3D models (i.e., volumetric or skeletal models), skeletal models that use a simplified representation of the human body or of the gesture-relevant body parts, or image-based models based on, for example, deformable templates of the gesture-relevant body parts, among other techniques. See, e.g., Wu et al., "Vision-Based Gesture Recognition: A Review," in Gesture-Based Communication in Human-Computer Interaction (Springer, 1999); Pavlovic et al., "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review," IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):677-695, July 1997.
Gesture recognition module 156 provides input to an electronic device 214, allowing the user to remotely control the electronic device 214 and/or manipulate virtual objects 216, such as prototypes/models, blocks, spheres or other shapes, buttons, levers, or other controls, in a virtual environment displayed on a screen 218. The user can perform gestures using any part of her body, such as a finger, a hand, or an arm. As part of gesture recognition, or independently thereof, the image analyzer 106 can determine the shapes and positions of the user's hand in 3D space and in real time; see, e.g., U.S. Application Serial Nos. 61/587,554, 13/446,585, and 61/724,091, filed on January 17, 2012, March 7, 2012, and November 8, 2012, respectively, the entire disclosures of which are hereby incorporated by reference. As a result, the image analyzer 106 can not only recognize gestures to provide input to the electronic device 214, but can also capture the position and shape of the user's hand in consecutive video images to characterize the gesture in 3D space and reproduce it on the display screen 218.
In one embodiment, gesture recognition module 156 compares the detected gesture against a library of gestures electronically stored as records in a database 220, which is implemented in the image analysis system 106, the electronic device 214, or an external storage system 222. (As used herein, the term "electronically stored" includes storage in volatile or nonvolatile memory, the latter including disks, flash memory, etc., and extends to any computationally addressable storage media, including, for example, optical storage.) For example, gestures can be stored as vectors, i.e., mathematically specified spatial trajectories, and each gesture record can have a field specifying the relevant part of the user's body that made the gesture; thus, similar trajectories executed by the user's hand and head can be stored in the database as different gestures, so that an application can interpret them differently. Typically, the trajectory of a sensed gesture is mathematically compared against the stored trajectories to find a best match, and the gesture is recognized as corresponding to the located database entry only if the degree of match exceeds a threshold. The vectors can be scaled so that, for example, large and small arcs traced by the user's hand are recognized as the same gesture (i.e., corresponding to the same database record), but the gesture recognition module returns both the identity of the gesture and a value reflecting the scale of the gesture. The scale can correspond to the actual gesture distance traversed in performance of the gesture, or it can be normalized to some canonical distance.
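The trajectory comparison and scale recovery described here might look like the following sketch, which assumes the sensed and stored trajectories have been resampled to the same number of 3D points; the similarity metric and the 0.9 threshold are illustrative assumptions:

```python
import numpy as np

def match_gesture(observed, database, threshold=0.9):
    """Compare a sensed trajectory against stored gesture templates.

    Trajectories are centered and scale-normalized so that large and small
    arcs of the same shape match the same record; the recovered scale is
    returned with the identity (or None if no match clears the threshold).
    """
    def normalize(points):
        pts = np.asarray(points, dtype=float)
        span = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
        return (pts - pts.mean(axis=0)) / (span + 1e-9), span

    obs_norm, obs_span = normalize(observed)
    best_name, best_score, best_scale = None, threshold, 1.0
    for name, template in database.items():      # template: (N, 3) point array
        ref_norm, ref_span = normalize(template)
        # similarity: 1 / (1 + mean point-wise distance) after normalization
        score = 1.0 / (1.0 + np.mean(np.linalg.norm(obs_norm - ref_norm, axis=1)))
        if score > best_score:
            best_name, best_score = name, score
            best_scale = obs_span / (ref_span + 1e-9)
    return best_name, best_scale
```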
In some embodiments, gesture recognition module 156 detects more than one gesture. Referring to Fig. 2B, for example, the user may perform an arm-waving gesture while flexing her fingers. Gesture recognition module 156 detects the waving and flexing gestures 200B and records a waving trajectory 330 and five flexing trajectories 332, 334, 336, 338, 340 for the five fingers. Each trajectory can be converted into a vector along, for example, six Euler degrees of freedom in Euler space (x, y, z, roll, pitch, and yaw). The vector with the largest magnitude represents, for example, the dominant component of the motion (in this case, the wave), and the remaining vectors can be ignored. Of course, the slight finger motions could instead be the dominant gesture isolated for interpretation, with the larger-scale waving motion of the hand ignored. In one embodiment, a vector filter (which can be implemented using conventional filtering techniques) is applied to the multiple vectors to filter out the small vectors and identify the dominant vector. This process can be repeated iteratively until one vector, the dominant component of the motion, is identified. In some embodiments, a new filter is generated each time a new gesture is detected.
If gesture recognition module 156 is implemented as part of a specific application (such as a game or controller logic for a television), the database gesture records can also contain input parameters corresponding to the gestures (which can be scaled using the scale value); in a generic gesture recognition module 156 implemented as a utility available to multiple applications, this application-specific parameter is omitted, and when an application invokes gesture recognition module 156, it interprets the identified gesture according to its own programming.
Thus, with reference to Fig. 2A, gesture recognition module 156 identifies the user's gesture by reference to the database 220 and transmits signals indicative of the identified gesture to the electronic device 214. The device 214 treats the identified gesture and scale value as an input signal and assigns an input parameter value thereto; the input parameter is then used by an application running on the electronic device 214 to facilitate gesture-based user interaction. For example, the user may first move her hand in a repeated or distinctive way (e.g., performing a waving gesture) to initiate communication with the electronic device 214. Upon detecting and identifying this gesture, gesture recognition module 156 transmits a signal to the electronic device 214 indicating that a user has been detected; in response, the device 214 renders a suitable display (e.g., a control panel 224). The user then performs another gesture (e.g., moving her hand in an "up" or "down" direction), which is again detected by gesture recognition module 156. Gesture recognition module 156 identifies the gesture and the associated scale value and transmits these data to the electronic device 214; the device 214, in turn, interprets this information as an input parameter representing the desired action (just as if the user had pressed a button on a remote control), allowing the user to manipulate the data displayed on the control panel 224 (e.g., selecting a channel of interest, adjusting the audio volume, or changing the screen brightness). In various embodiments, the device 214 is connected to a source of video games (e.g., a video game console or CD- or network-based video games); the user can perform various gestures to remotely interact with the virtual objects 216 in the virtual environment (the video game). The detected gestures and scales are provided as input parameters to the currently running game, which interprets them and performs context-appropriate actions, i.e., generates screen displays responsive to the gestures. The various components of this system, gesture recognition module 156 and the executing components of the device 214 that interpret gestures and generate display content based thereon, can be separate (as illustrated), or can be organized within, or conceptually regarded as part of, the image analysis system 106.
In various embodiments, after the user successfully initiates communication with gesture recognition module 156 and the electronic device 214, gesture recognition module 156 generates a cursor 226 or graphic 228 (hereinafter "cursor") representing the detected part of the body (e.g., a hand), which is displayed on the device's screen 218. In one embodiment, gesture recognition module 156 locks the motion of the on-screen cursor 226 in phase with the user's actual gesturing motion. For example, when the user moves her hand in an upward direction, the displayed cursor 226 responsively moves upward on the display screen. As a result, the motion of the cursor 226 maps the user's gestures directly onto the displayed content, so that the user's hand and the cursor 226 behave like, respectively, a PC mouse and the cursor on a monitor. This allows the user to assess the relationship between her actual physical gestures and the resulting actions occurring on the screen 218 (e.g., the motion of a displayed virtual object 216). The absolute position of the hand is thus usually unimportant for display control; rather, the relative position and/or direction of motion of the user's body controls the on-screen action, e.g., the motion of the cursor 226.
An example 300A of user interaction is shown in Fig. 3A. As illustrated, the user gestures to move a displayed cursor 310 so that it at least partially overlaps a displayed virtual object of interest 312. The user then performs another gesture (e.g., a "finger click") to select the desired object 312. For the object 312 to be marked as user-selected, the user's motion (i.e., the movement of her body part) may need to satisfy a predetermined threshold degree of completion of the gesture (e.g., 95%); this value is stored in the database 220 or enforced by the application currently running on the electronic device 316.
For example, completing a "clicking" gesture, which actuates a button-like virtual control, may require the user's finger to travel a distance of five centimeters; when one centimeter of finger motion is detected, the gesture recognition system 314 recognizes the gesture by matching it against database records and determines the degree of completion of the identified gesture (in this case, 20%). In one embodiment, each gesture in the database includes multiple images or vectors, each associated with a degree of completion (e.g., from 1% to 100%) of the performed gesture; in other embodiments, the degree of completion is computed by simply comparing the observed vector against the stored vector, perhaps by interpolation. The degree of completion of the performed gesture (e.g., how far the user has moved her hand) can be rendered on the screen; indeed, assessment of the degree of completion of a gesture can be performed by the application running on the device 316 rather than by the gesture recognition system 314.
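For the five-centimeter click example, the degree of completion reduces to a clamped ratio; a minimal sketch:

```python
def completion_level(traveled_cm, required_cm=5.0):
    """Fraction of a gesture completed, e.g., finger travel toward a 5 cm
    'click'; the application fills the on-screen indicator by this amount.
    The 5 cm requirement is the example distance used in the text."""
    return max(0.0, min(traveled_cm / required_cm, 1.0))

# 1 cm of finger travel toward a 5 cm click is 20% complete, as in the example.
assert completion_level(1.0) == 0.2
```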
For example, the electronic device 316 can display a hollow circular icon 318. As the user moves her finger toward the device 316 (to perform a clicking or "touching" gesture), the device receives simple motion (position-change) signals from the gesture recognition system 314 and an application progressively fills the hollow circular icon with a color or colors. The degree to which the circle is filled indicates how close the gesture is to completion (or how far the user's finger has traveled from its original position). When the user fully completes the clicking or touching gesture, the circle is entirely filled, which may cause, for example, the virtual object 312 to be marked as selected.
In some embodiments, the device temporarily displays a second indication (e.g., changing the shape, color, or brightness of the indicator) to confirm the selection of the object. The confirming indication of gesture completion and/or object selection thereby enables the user to easily predict the exact moment at which a virtual object is selected; accordingly, the user can subsequently manipulate the selected on-screen object in an intuitive fashion. Although the discussion herein focuses on filling the hollow circle 318, the technology disclosed is not limited to any particular type of displayed image capable of indicating the degree of completion of a performed gesture. For example, a hollow bar 320 progressively filled with color, a color gradient 322, a change in brightness, or any other suitable indicator of the degree of completion of the user's gesture may be used, and all are within the scope of the technology disclosed.
The gesture recognition system 314 continuously detects and identifies the user's gestures based on the shapes and positions of the gesturing part of the user's body in the captured 2D images. A 3D image of the gesture can be reconstructed by analyzing the temporal correlations of the identified shapes and positions of the user's gesturing body part in consecutively acquired images. Because the reconstructed 3D images can accurately detect and recognize small-scale gestures in real time (e.g., finger movements of less than one centimeter), the gesture recognition system 314 provides high detection sensitivity. In various embodiments, once the gesture is identified and the instruction associated with it is recognized, the gesture recognition system 314 transmits signals to the device 316 to activate an on-screen indicator displaying the degree of completion of the user's gesture. The on-screen indicator provides feedback that allows the user to control the electronic device 316 and/or manipulate the displayed virtual objects 312 using various degrees of movement, from gestures as large as a body-height jump to gestures as small as a finger click.
In one embodiment, once the object 312 is marked as selected, the gesture recognition system 314 locks the object 312 together with the on-screen cursor 310 to reflect the movements the user makes subsequently. For example, when the user moves her hand in a downward direction, the displayed cursor 310 and the selected virtual object 312 responsively move downward together on the display screen. Again, this allows the user to accurately manipulate the virtual objects 312 in the virtual environment.
In another embodiment 300B, once a virtual object has been marked as selected, the user's subsequent movements are converted, computationally, into simulated physical forces applied to the selected object. Referring to Fig. 3B, the user may, for example, move her index finger forward a distance of one centimeter to select a virtual object 330; the selection can be confirmed by completely filling a hollow circle 332 displayed on the screen. The user may then move her index finger forward another centimeter. Upon detecting this motion, the gesture recognition system 314 converts it into a simulated force; the force conversion can be based on a physical simulation model, the degrees of freedom of the body part's motion, the mass and velocity of the moving body part, gravity, and/or any other relevant parameter. The application running on the device 316 that generates the virtual object 330 responds to the force data by rendering the behavior of the virtual object 330 under the influence of the applied force, based on a motion model that includes the Newtonian laws of physics.
For example, if the user's motion is relatively small in magnitude, within a predetermined range (e.g., less than one centimeter), and/or relatively slow, the converted force deforms the shape of the selected object 330; if, however, the user's motion exceeds the determined range (i.e., more than ten centimeters) or a threshold velocity, the device 316 treats the converted force as large enough (i.e., larger than the simulated static friction force) to move the selected object 330. Upon receiving the pushing force, the rendering application of the device 316 simulates the motion of the object 330 based on the motion model and then updates this motion behavior on the screen. The rendering application can take other actions with respect to the virtual object 330, e.g., stretching or bending it, or applying mechanical controls to buttons, levers, hinges, handles, etc. As a result, the simulated force replicates the effect of an equivalent force in the real world, making the interaction predictable and realistic for the user.
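A toy illustration of the two regimes (deform versus move), using an invented VirtualObject interface; the force law and all thresholds are illustrative stand-ins for the physics-based model described above:

```python
class VirtualObject:
    """Minimal stand-in for a rendered object with a static-friction threshold."""
    static_friction = 50.0
    def deform(self, force):    print(f"deform under {force:.1f} force units")
    def translate(self, force): print(f"slide under {force:.1f} force units")

def apply_gesture_force(obj, displacement_cm, speed_cm_s, mass_kg=0.5):
    """Map a tracked motion onto a simulated force: small, slow motions deform
    the object; motions beyond the range or speed threshold overcome the
    simulated static friction and move it."""
    force = mass_kg * speed_cm_s * displacement_cm   # crude proxy, not F = ma
    if displacement_cm < 1.0 and speed_cm_s < 5.0:
        obj.deform(force)
    elif displacement_cm > 10.0 or force > obj.static_friction:
        obj.translate(force)
    # otherwise the force stays below static friction and nothing visibly moves

apply_gesture_force(VirtualObject(), displacement_cm=0.5, speed_cm_s=2.0)  # deforms
```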
It should be stressed that the foregoing functional division between the gesture recognition system 314 and the rendering application running on the device 316 is exemplary only; in some embodiments, the two are more tightly coupled or even unified, so that rather than simply passing generic force data to the application, the gesture recognition system 314 has world knowledge of the environment rendered on the device 316. In this way, the gesture recognition system 314 can apply object-specific knowledge (e.g., friction and inertia) to the force data, so that the physical effect of the user's motion on the rendered objects is computed directly (rather than from generic force data generated by the gesture recognition system 314 and processed on an object-by-object basis by the device 316). Moreover, in various embodiments, mocap program 144 runs on the device 316, and component 314 is a simple sensor that merely transmits images (e.g., high-contrast images) to the device 316 for analysis by mocap program 144. In such embodiments, mocap program 144 can be a standalone application that provides gesture information to a rendering application (such as a game) running on the device 316, or, as discussed above, it can be integrated within the rendering application (e.g., a game application can be provided with suitable mocap functionality). The division of computational responsibility between the system 314 and the device 316, as well as between hardware and software, represents a design choice.
Fig. 3C illustrates an exemplary method 300C for supporting the user's gestural interaction with an electronic device, and in particular for monitoring the degree of completion of a gesture so that the on-screen action can be deferred until the gesture is complete. In a first action 352, the user initiates communication with the electronic device by performing a gesture. In a second action 354, the gesture is detected by the gesture recognition system. In a third action 356, the gesture recognition system compares the detected gesture against gesture records stored in a database to identify the gesture and assess, in real time, its degree of completion. The gesture recognition system then transmits signals to the electronic device (in a fourth action 358). (As noted before, completion-level functionality can instead be implemented on the device, with the gesture recognition system providing only motion-tracking data.) Based on the signals, the electronic device displays an on-screen indicator reflecting the degree of completion of the user's gesture (in a fifth action 360). If the degree of completion exceeds a threshold (e.g., 95%), the electronic device and/or the virtual object displayed on the screen is subsequently manipulated based on the gesture the user is currently performing or performs later (actions 362, 364).
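A single pass of this flow might be coded as follows; the recognizer and device interfaces here are assumed for illustration and do not appear in the patent:

```python
def process_frames(frames, recognizer, device, threshold=0.95):
    """One pass of the Fig. 3C flow: match captured frames against the gesture
    database (action 356), report the degree of completion for the on-screen
    indicator (actions 358-360), and trigger the actual manipulation only once
    the gesture is at least 95% complete (actions 362-364)."""
    gesture, completion = recognizer.match(frames)   # assumed interface
    if gesture is None:
        return
    device.show_completion(gesture, completion)      # drives the indicator
    if completion >= threshold:
        device.apply_gesture(gesture)                # deferred on-screen action
```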
With reference to figure 4A, implement in 400A at one, the motion 410 of object 412 shown on screen 414 is determined in the absolute space displacement based on the actual motion of user.Such as, hand 416 can be slided into one centimetre, right by user first as indicated at 418; When detecting and identify this posture, signal is sent to the electronic installation 422 of instruction motion by gesture recognition system 420, described signal interpretation is input parameter by this device, and responsively, take action make cursor or virtual objects 412 on screen 414 in the same direction move (that is, being rendered as motion) such as 100 pixels.Relation between the physical motion of user and the motion presented can be arranged by such as changing the scaling factor for the posture be associated stored by gesture recognition system 420 by user.If gesture recognition system 420 is integrated with present application program, then user can use posture to carry out this change.
Such as, user can specify: larger screen virtual motion (that is, crossing more pixel) that the motion that cursor or object 412 respond given hand is made.First user can activate by making clear and definite posture the ratio control panel 424 be presented on screen.Control panel 424 can be rendered as such as slide block, circular dial or any suitable form.User makes another posture subsequently and carrys out resize ratio based on the pattern of scaling control panel 424.If scaling control panel is slide block, then user slides its finger to change this ratio.In another embodiment, scaling control panel is not had to be presented on screen; Ratio is the follow-up stance adjustment based on this user.As another example, user can by opening its fist or making its thumb and forefinger separate increase scaling, by clenching fist or making its forefinger reduce scaling to thumb movement.Although discussion here concentrates on hand or finger gesture for illustrative purposes, disclosed technology is not limited to any posture made by any specific part of human body.Also any suitable posture can be used for the communication between user and electronic installation, and it is within the scope of disclosed current techniques.
In other embodiments, the scale adjustment is accomplished with a remote-control device (which the user operates by pressing buttons) or with a wireless device such as a tablet or smartphone. Different scale factors may be associated with individual gestures (i.e., the scale is local and may differ from gesture to gesture) and stored in the corresponding gesture records in the database. Alternatively, a scale factor may apply to several or all of the gestures stored in the gesture database (i.e., the scale is global and the same for at least several gestures).
Alternatively, the relationship between physical and on-screen motion may be determined, at least in part, by characteristics of the display and/or the rendered environment. For example, referring to Fig. 4B, in one implementation 400B, the acquired (camera) image 430 of the user has brightness values in a matrix of M × N pixels, and the display screen of the electronic device 422 renders frames of X × Y pixels. When the user makes a waving gesture that produces, in the camera image, a horizontal displacement of m pixels and a vertical displacement of n pixels, the relative horizontal and vertical motions are set to m/M and n/N, respectively, for scaling purposes. In response to the gesture, the cursor or object 412 on the display screen 414 can be moved (x, y) pixels, where in the simplest form x = m/M × X and y = n/N × Y. Even for a display scale that is essentially unity (1:1), adjusted for the relative sizes of the user's environment and the display screen, the camera's position and distance from the user, its focal length, the image-sensor resolution, the viewing angle, and so on are typically also taken into account; as a result, the quantities x and y are multiplied by a constant, making the mapping from "user space" to the rendered image essentially an affine mapping. Again, the constant can be adjusted to amplify or attenuate the on-screen response. This interaction between the user and the virtual object 412 rendered on the screen can give the user a realistic sense of manipulating the object while moving it within the virtual environment.
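In code, the simplest form of this mapping might look like the sketch below; the constant `k` standing in for the camera-dependent factors (distance, focal length, sensor resolution) is an assumption for illustration.

```python
def to_screen_displacement(m: float, n: float,
                           M: int, N: int,   # camera image size in pixels
                           X: int, Y: int,   # display size in pixels
                           k: float = 1.0):  # camera/environment constant
    """Affine map of a gesture displacement (m, n) in camera pixels to a
    cursor displacement (x, y) in screen pixels: x = k*(m/M)*X, y = k*(n/N)*Y."""
    return k * (m / M) * X, k * (n / N) * Y

# A 30-pixel horizontal wave in a 640x480 image driving a 1920x1080 screen:
print(to_screen_displacement(30, 0, 640, 480, 1920, 1080))  # (90.0, 0.0)
```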
The scaling relationship between the user's actual motion and the resulting on-screen action can pose practical challenges, particularly when the user's space is limited. For example, when two family members sit together on a sofa playing a video game displayed on a television, the effective range of motion of each user is restricted by the presence of the other. Accordingly, the scale factor can be changed to reflect the restricted range of motion, so that small physical motions correspond to larger on-screen movements. This can occur automatically when the gesture-recognition system detects multiple adjacent users. In various embodiments, the scale can also depend on the content rendered on the screen. For example, in a busy rendered environment containing many objects, a small scale may be desirable to permit precise navigation by the user, whereas for simpler or more open environments (e.g., where the user pretends to throw a ball or swing a golf club and the detected action is rendered on the screen), a large scale may be preferable.
As noted above, the appropriate relationship between the user's motion and the motion displayed on the screen depends on the user's position relative to the recording camera. For example, the ratio of the user's actual motion m to the pixel size M of the captured image depends on the viewing angle of the camera implemented in the gesture-recognition system 420 and on the distance between the camera and the user. If the viewing angle is wide, or the user is far from the camera, the detected relative motion (i.e., m/M) of the user's gesture is smaller than it would be if the viewing angle were narrower or the user were closer to the camera. Consequently, in the former case the virtual object moves too little on the screen in response to the gesture, and in the latter case it moves too much. In various embodiments, the ratio of the user's actual motion to the corresponding motion rendered on the screen is adjusted automatically based on, for example, the distance between the user and the gesture-recognition system (which can be tracked by ranging); this allows the user to move toward or away from the gesture-recognition system without destroying her intuitive sense of the relationship between actual and rendered motion.
In various embodiments, when a gesture is recognized but the detected user motion is very small (i.e., below a predetermined threshold), the gesture-recognition system 420 switches from a low-sensitivity detection mode to a high-sensitivity mode in which the 3D image of the gesture is accurately reconstructed from the acquired 2D images and/or a 3D model. Because a high-sensitivity gesture-recognition system can accurately detect small oscillatory movements (e.g., of less than a few millimeters) made by a small part of the body (e.g., one finger), the ratio of the user's actual motion to the resulting displayed motion can be adjusted over a wide range, for example between 1000:1 and 1:1000.
Fig. 4C illustrates an exemplary method 400C, in accordance with embodiments of the disclosed technology, by which a user dynamically adjusts the relationship between her actual motion and the resulting object motion displayed on the screen of an electronic device. In a first action 452, the user initiates communication with the electronic device by performing a gesture. In a second action 454, the gesture is detected and recognized by the gesture-recognition system. In a third action 456, the gesture-recognition system identifies an instruction associated with the gesture by comparing the detected gesture against gestures stored in a database. The gesture-recognition system then determines, based on the instruction, the ratio of the user's actual motion to the resulting virtual action rendered on the device's screen (fourth action 458), and transmits a signal indicative of the ratio to the electronic device (fifth action 460). In a sixth action 462, upon receiving the signal, the electronic device displays the virtual action on the screen based on the determined ratio and the user's subsequent motions.
To facilitate interaction, the system 100B can present various user-interface elements to the user via the display 138. A user-interface element may be created in response to some gesture (or other form of input) from the user, or by a software program running on the processor 132 (e.g., the motion-capture program 144 or another application or game). In one embodiment, a disk-shaped "puck" user-interface element 502 is presented on the display 138, as shown in Fig. 5A. The gesture-recognition system 314 described above recognizes gestures from the user and, in accordance with embodiments of the disclosed technology, moves the puck 502 accordingly. In one embodiment, a representation 504 of the user's hand also appears on the display 138; when the hand representation 504 touches a side 506 of the puck 502 and moves in a first direction 508, the puck moves in a corresponding direction 510 consistent with the motion of the representation 504. The user can similarly, via the representation 504, touch the puck 502 at any position on its side and make a "pushing" gesture, causing the puck 502 to move in the corresponding direction.
The embodiment of the disclosed technology shown in Fig. 5A is an illustrative example; the disclosed technology is not limited to this embodiment. The representation 504 of the user's hand may be absent from the screen 138; the gesture-recognition system 314 may recognize a gesture from the user as one intended to push the puck 502 without displaying the representation 504, and the user may create the gesture using another part of her hand (e.g., the palm) or using another body part or an object. In other embodiments, the representation 504, if displayed, may include other objects, such as a pointer or paintbrush, or another body part of the user. The puck 502 may be of any size or shape, such as circular, square, oval, or triangular.
The position of the puck 502 can be used as an input to a computer program, a display setting, a game, or any other such software application, or as any other variable. In one embodiment, the x position of the puck 502 controls a first variable and its y position controls a second (related or unrelated) variable. Fig. 5B shows one such application: a grayscale-selection widget 512 that includes the puck 502. By pushing the puck 502 via one or more gestures, the user can select a grayscale value on the selection widget 512; for example, the grayscale value corresponding to the center of the puck 502 may be selected for use in, say, a computer-aided drawing program. The selection widget 512 may present any number of other such values (e.g., colors) for selection via the puck 502.
The puck 502 may move in response to user gestures in any number of different ways. For example, the puck 502 may continue moving for a period of time after the user stops pushing it, decelerating to a stop in accordance with a virtual mass and a virtual coefficient of friction (or other similar values) of the widget 512. The puck 502 may begin to move only after the user's gesture contacts one of its sides and the user's further motion exceeds a minimum threshold distance (i.e., the puck is "sticky," and the gesture must cover an initial minimum distance before the puck "breaks free"). In one embodiment, when the user's gesture breaks contact with the puck 502, the puck 502 is tethered by a virtual "spring" to a point on the widget 512. Pressing the top surface of the puck, as with a traditional button, can cause a further action to occur. In one embodiment, after pressing the top surface of the puck 502, the user can make a rotation gesture, and the gesture-recognition system 314 can rotate the puck accordingly (and change a parameter of the application accordingly).
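A one-dimensional sketch of such a motion model follows; the parameter values and the linear friction law are illustrative assumptions, not values from the disclosure.

```python
class Puck:
    """Sticky puck with virtual mass and friction, 1D for brevity."""
    def __init__(self, mass=1.0, friction=2.0, sticky_distance=0.02):
        self.position = 0.0   # along the widget axis
        self.velocity = 0.0
        self.mass = mass
        self.friction = friction                 # friction force magnitude
        self.sticky_distance = sticky_distance   # travel needed to break free

    def push(self, gesture_travel, gesture_speed):
        # "Sticky" behavior: ignore pushes shorter than the threshold distance.
        if abs(gesture_travel) < self.sticky_distance:
            return
        self.velocity = gesture_speed if gesture_travel > 0 else -gesture_speed

    def step(self, dt):
        # Coast after the push ends, decelerating under virtual friction.
        self.position += self.velocity * dt
        decel = (self.friction / self.mass) * dt
        if abs(self.velocity) <= decel:
            self.velocity = 0.0
        elif self.velocity > 0:
            self.velocity -= decel
        else:
            self.velocity += decel
```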
In other embodiments of the disclosed technology, the user can use gestures to create additional user-interface elements and then interact with those elements. For example, the gesture-recognition system 314 may detect that the user has made a circular motion with a finger (or other object) and interpret that circular motion as a desire to create a button on the display 138. Once the button is created, the user can interact with the user-interface element (by, e.g., pressing it), thereby causing an associated function to be performed. The function may be determined by the context of the display 138, by the position on the display 138 at which the user-interface element was created, or by other user input.
In another embodiment, the gesture-recognition system 314 creates a slider in response to a user gesture, such as extending two fingers (e.g., the index and middle fingers) and gesturing with them (e.g., with a motion parallel to the plane of the display 138). Once created, the slider can be used to control a suitable application (e.g., scrolling a page, menu, list, or portion of a document).
In another embodiment, the gesture-recognition system 314 interprets a user's forward or backward finger-pointing gesture as a "mouse click" (or another similar selection or confirmation command). The user may point her hand or finger at the display 138 and move the finger toward the display 138 along the direction of its long axis; if the distance of the finger's motion exceeds a threshold (e.g., 1, 5, or 10 centimeters), the gesture-recognition system 314 interprets the gesture as a mouse click. In one embodiment, the gesture is interpreted as a mouse click only when at least a certain proportion of the motion (e.g., 50%) lies in the finger's pointing direction. A similar gesture moving away from the display 138 may be interpreted as another, different user input. In one embodiment, the forward gesture is a left mouse click and the backward gesture is a right mouse click.
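A sketch of this click test appears below; the 5 cm default, the 50% axial fraction, and the return labels are assumptions chosen to mirror the examples in the text.

```python
import numpy as np

def is_mouse_click(displacement, pointing_dir,
                   min_travel=0.05,          # e.g., the 5 cm threshold above
                   min_axial_fraction=0.5):  # e.g., 50% of motion along the axis
    """Interpret a fingertip displacement (meters, 3-vector) as a click when
    its travel exceeds a threshold and enough of the motion lies along the
    finger's pointing direction; forward -> left click, backward -> right."""
    displacement = np.asarray(displacement, dtype=float)
    direction = np.asarray(pointing_dir, dtype=float)
    direction /= np.linalg.norm(direction)
    travel = np.linalg.norm(displacement)
    if travel < min_travel:
        return None
    axial = float(np.dot(displacement, direction))  # signed travel along axis
    if abs(axial) / travel < min_axial_fraction:
        return None
    return "left_click" if axial > 0 else "right_click"
```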
Gestures of other users, the motions of other objects, or combinations thereof can be captured collectively and used to determine a rotation factor. The gesture-recognition system 314 may analyze all or most of the motion present in a sequence of captured images and generate a single rotation factor (expressed as, e.g., a rotation of some number of degrees) based thereon. In one embodiment, the gesture-recognition system 314 selects a focal point at or near the center of the captured motion, computes the amount of rotation of each moving object relative to that focal point, and computes an average rotation amount based thereon. In the average, the motions of different objects may be weighted according to their acceleration, velocity, size, proximity to the display 138, or other similar factors. The single rotation factor can then serve as an input to a program running on the system 100B.
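A sketch of how such a rotation factor might be computed is shown below; the two-frame track format and the angle wrap-around handling are assumptions made for illustration.

```python
import numpy as np

def rotation_factor(tracks, weights=None):
    """Single rotation factor (degrees) from 2D tracks of several moving
    objects: choose a focal point near the center of motion, compute each
    object's rotation about it between two frames, and average (optionally
    weighted by, e.g., speed, size, or proximity to the display)."""
    tracks = [np.asarray(t, dtype=float) for t in tracks]  # each: [[x0,y0],[x1,y1]]
    focus = np.mean([t[0] for t in tracks], axis=0)        # focal point
    angles = []
    for t in tracks:
        vx, vy = t[0] - focus
        wx, wy = t[1] - focus
        d = np.degrees(np.arctan2(wy, wx) - np.arctan2(vy, vx))
        angles.append((d + 180.0) % 360.0 - 180.0)         # wrap to [-180, 180)
    return float(np.average(angles, weights=weights))
```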
As described above, a gesture-recognition system (e.g., the system 100 shown in Fig. 1A) uses one or more cameras 102, 104 to capture images of an object such as a hand 114; the object may be illuminated with one or more light sources 108, 110. An object-detection module 152 detects the object, and a gesture-recognition module 156 detects gestures made with it. Once detected, a gesture is input to an electronic device, which can use the gesture in various ways (e.g., to manipulate virtual objects). Many different types of gestures may be detected, however, and an application running on the electronic device may not use or need every detected gesture. Transmitting gestures that will not be used by the application can introduce unnecessary complexity and/or consume unnecessary bandwidth on the link between the application and the gesture-recognition module 156.
In one embodiment, only a subset of the gestures captured by the gesture-recognition module 156 is transmitted to the application running on the electronic device. As shown in Fig. 1A, recognized gestures may be passed from the gesture-recognition module 156 to a gesture filter 158 and filtered based on one or more of their characteristics. Gestures that satisfy the criteria of the filter 158 are transmitted to the application; gestures that do not pass the filter are not transmitted and/or are deleted. The gesture filter 158 is illustrated as a standalone module within the memory 134, but the disclosed technology is not limited to this configuration; the functionality of the filter 158 may be incorporated, in whole or in part, into the gesture-recognition module 156. In various embodiments, the gesture-recognition module 156 either recognizes all detected gestures without regard to the settings of the filter 158, or recognizes only the subset of detected gestures defined by those settings.
Fig. 6 is a flowchart 600 of a method of filtering gestures in accordance with an embodiment of the disclosed technology. In one embodiment, the method distinguishes gestures of interest from gestures not of interest in a three-dimensional (3D) sensory space. The method includes receiving input defining reference characteristics of one or more reference gestures (action 652); detecting one or more actual gestures in the 3D sensory space using an electronic sensor and determining their actual characteristics from the sensor data (action 654); comparing the actual gestures against the reference gestures to determine a set of gestures of interest (action 656); and supplying the set of gestures of interest and corresponding gesture parameters for further processing (action 658).
In one embodiment, when the reference characteristic is gesture path, actual gestures with a straight-line path, such as a lateral wave, are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture velocity, actual gestures with a high velocity are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture configuration, actual gestures made with a hand pointing with a particular finger are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture configuration, actual gestures made with a clenched fist are interpreted as the set of gestures of interest.
In another embodiment, when the reference characteristic is gesture shape, actual thumbs-up hand gestures are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture length, waving gestures are interpreted as the set of gestures of interest. In another embodiment, when the reference characteristic is gesture position, actual gestures made at a distance from the electronic sensor below a threshold are interpreted as the set of gestures of interest. When the reference characteristic is gesture duration, actual gestures that persist in the 3D sensory space for a threshold time period, but not actual gestures that persist in the 3D sensory space for less than the threshold time period, are interpreted as the set of gestures of interest. Of course, more than one characteristic can be used at a time.
The characteristics applied by the filter 158 can be restricted to suit a particular application or set of applications. In various embodiments, the characteristics may be received from a menu interface, read from a command or configuration file, communicated via an API, or supplied by any other similar method. The filter 158 may include multiple preconfigured sets of characteristics and allow a user or application to select one of the sets. Examples of filter characteristics include the path along which a gesture is made (e.g., the filter 158 may pass only gestures with relatively straight paths and block gestures with curved paths); the velocity of a gesture (e.g., the filter 158 may pass high-velocity gestures and block low-velocity gestures); and/or the direction of a gesture (e.g., the filter may pass gestures with side-to-side motion and block gestures with forward-and-backward motion). Further filter characteristics may be based on the configuration or shape of the object making the gesture, or on its disposition; for example, the filter 158 may pass only gestures made with a hand pointing with a particular finger (e.g., the ring finger), with a clenched fist, or with an open hand. The filter 158 may further pass only gestures made with a thumbs-up or thumbs-down gesture, e.g., for a voting application.
The filtering performed by the filter 158 may be implemented as follows. In one embodiment, each gesture detected by the gesture-recognition module 156 is assigned a set of characteristics, each set including one or more characteristics (e.g., velocity or path); the gestures and their characteristics are held in a data structure. The filter 158 determines which assigned characteristics satisfy its filtering criteria and passes the gestures associated with those characteristics. Gestures passing the filter 158 may be returned to one or more applications via an API or a similar method. Alternatively or in addition, these gestures may be displayed on the display 138 and/or presented in, e.g., a menu (as in a hands-on tutorial application).
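The description above maps naturally onto a small data structure and predicate, as in this sketch; the specific characteristics, thresholds, and class names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Gesture:
    name: str
    path: str        # e.g., "straight" or "curved"
    speed: float     # meters per second
    direction: str   # e.g., "side-to-side" or "forward-back"

@dataclass
class GestureFilter:
    """Passes only gestures whose assigned characteristics satisfy the
    filter's criteria, in the manner described for filter 158."""
    allowed_paths: set = field(default_factory=lambda: {"straight"})
    min_speed: float = 0.2  # illustrative velocity threshold
    allowed_directions: set = field(default_factory=lambda: {"side-to-side"})

    def passes(self, g: Gesture) -> bool:
        return (g.path in self.allowed_paths
                and g.speed >= self.min_speed
                and g.direction in self.allowed_directions)

    def apply(self, gestures):
        # Gestures failing the criteria are simply not forwarded.
        return [g for g in gestures if self.passes(g)]
```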
As described above, the gesture-recognition module 156 compares the detected motion of the object against a library of known gestures and, if there is a match, returns the matching gesture. In one embodiment, gestures defined by a user, programmer, application developer, or other party supplement, modify, or replace the known gestures. If the gesture-recognition module 156 recognizes a user-defined gesture, it returns that gesture to one or more programs via an API (or similar method). In one embodiment, referring again to Fig. 1A, a gesture-settings module 160 screens gesture motions based on input defining gesture characteristics and returns the set of gestures having matching properties.
User-defined characteristics may include any number of attributes of a variety of gestures. For example, the characteristics may include the path of a gesture (e.g., relatively straight, curved, circular, or swiping); parameters of the gesture (e.g., a minimum or maximum length); spatial characteristics of the gesture (e.g., the region of space in which the gesture occurs); temporal characteristics of the gesture (e.g., a minimum or maximum duration); and/or the velocity of the gesture (e.g., a minimum or maximum velocity). The disclosed technology is not limited to these attributes; any other attribute of a gesture is within its scope.
Conflicts between user-defined gestures and predefined gestures can be resolved in any number of ways. A programmer may, for example, specify that the predefined gesture be ignored. In another embodiment, user-defined gestures take precedence over predefined gestures, so that if a gesture matches both, the user-defined gesture is returned.
In various embodiments, a gesture-training system helps application developers and/or end users define gestures of their own and/or adapt gestures to their own needs and preferences; in other words, it goes beyond the gestures programmed or "canned" in advance. The gesture-training system may interact with the user through ordinary language (e.g., a series of questions) to better delimit the actions the user wishes the system to recognize. By answering these questions in a guided setup process, the user can define parameters and/or parameter ranges for the corresponding gestures, thereby resolving ambiguity. Advantageously, this approach provides reliable gesture recognition without the algorithmic complexity typically associated with requiring the computer to guess at answers; it therefore helps reduce software complexity and cost. In one embodiment, once the system has been trained to recognize a given gesture or action, it can create an object (e.g., a file, data structure, etc.) for that gesture or action to assist in recognizing it later. The object may be exposed through an application programming interface (API) and used by developers and non-developer users alike. In some embodiments, such data is, or can be, shared among developer and non-developer users, thereby facilitating collaboration and the like.
In some embodiments, gesture training is conversational, interactive, and dynamic: depending on the responses the user provides, the next question, or the next parameter to be specified, may be selected. The questions may be presented to the user in visual or audio form (e.g., as text displayed on a computer screen, or as output through a speaker). The user's responses may likewise be provided in various modalities, such as text entered via a keyboard, selection of graphical user-interface elements (e.g., using a mouse), voice commands, or, in some cases, basic gestures the system already recognizes reliably. (For example, "thumbs-up" or "thumbs-down" gestures may be used to answer any yes-or-no question.) In addition, as illustrated by the examples below, some questions call for actions, specifically the performance of exemplary gestures (e.g., boundary points of the range of a typical gesture), rather than verbal answers. In such cases, the system can use, e.g., machine-learning methods to extract the relevant information from selected camera images or captured video streams of the action.
Fig. 7 is a flowchart 700 of a method of customizing gesture interpretation for a particular user. In one embodiment, the method includes prompting the user to select characteristic values for a gesture in free space and receiving the selected characteristic values (action 752); prompting the user to perform a characteristic-focused demonstration of boundaries of the gesture in a three-dimensional (3D) sensory space (action 754); determining a set of parameters of the gesture from the boundary demonstration captured by an electronic sensor (action 756); and storing the set of parameters and corresponding values for use in gesture recognition (action 758).
The method also includes testing the interpretation of a particular gesture by prompting the user to perform a complete demonstration of the particular gesture in the 3D sensory space, determining a set of parameters of the particular gesture from the complete demonstration captured by the electronic sensor, comparing the set of parameters of the particular gesture against the corresponding set of parameters determined from the boundary demonstration and the selected characteristic values, and, in action 760, reporting the result of the comparison to the user and receiving confirmation whether the interpretation of the particular gesture is correct.
The method also includes using a questionnaire to prompt the user to select the characteristic values of the gesture. In one embodiment, using a questionnaire to prompt the user to select characteristic values includes receiving from the user a minimum threshold time period that the gesture must persist in the 3D sensory space before it is interpreted. In another embodiment, performing the characteristic-focused boundary demonstration includes the user making a pointing gesture with a particular finger as the configuration of the gesture. Performing the characteristic-focused boundary demonstration can also include the user making a fist with a hand as the configuration of the gesture, or making a thumbs-up or thumbs-down gesture with a hand as the shape of the gesture.
In one embodiment, performing the characteristic-focused boundary demonstration includes the user making a thumbs-up or thumbs-down gesture with a hand as the shape of the gesture. According to one embodiment, performing the characteristic-focused boundary demonstration includes the user making a pinching gesture to set a minimum gesture distance as a gesture size. In another embodiment, performing the characteristic-focused boundary demonstration also includes the user making a waving gesture to set a maximum gesture distance as a gesture size.
In another embodiment, performing the characteristic-focused boundary demonstration includes the user making a finger-flick gesture to set the fastest gesture motion. In one embodiment, performing the characteristic-focused boundary demonstration includes the user making a waving gesture to set the slowest gesture motion. Performing the characteristic-focused boundary demonstration can include the user making a lateral sweep gesture to set a straight-line gesture path. According to one embodiment, performing the characteristic-focused boundary demonstration includes the user making a circular sweep to set a circular gesture path.
Figs. 8A, 8B, and 8C show a series of questions and prompts 800A, 800B, and 800C for an exemplary training-guidance flow according to one embodiment. As shown, in actions 852 and 854 the user is first asked how many hands and fingers are involved in the gesture. Then, in action 856, the system can determine the total time period of the gesture by asking for the minimum and maximum amounts of time the gesture may take. In action 858, a lower cutoff, such as one second, is set for the maximum amount of time.
In the next several interactions, the system asks whether the size, speed, and direction of the user's gesture are important. In action 860, if size is important, the user is asked to demonstrate the minimum and maximum reasonable motions. As a result, the automatically generated recognizer (i.e., the object created during training based on the user's input) can subsequently quantify the size of a gesture and output a size-normalized gesture. Relevant training parameters include parameters indicative of motion, path, start and stop points, arc length, and the like, and/or combinations thereof, and/or parameters computed from the foregoing. If size is unimportant, gestures are always normalized without regard to size; in this case, the relevant training parameters include normalized motion parameters (including, e.g., motion, path, start and stop points, arc length, etc., and/or combinations thereof, and/or parameters computed from the foregoing).
In action 862, if speed is important, the user is asked to demonstrate the fastest and slowest motions. From the observed motions, the system can also quietly examine the range of accelerations. The speed demonstrations enable the automatically generated recognizer to output speeds (e.g., based on a Fourier transform of the time-varying speed along the gesture, which permits identification of the intrinsic speeds of the data in the frequency domain). Relevant training parameters include the translation distance (as a Euclidean distance, i.e., (dx² + dy² + dz²)^(1/2)) and a duration window (i.e., an indication of how long the gesture lasts, identifying the relevant time span for analysis). If speed is unimportant, gestures are speed-normalized. To characterize the temporal aspect of a gesture, time is converted into space, i.e., uniform sampling is used (e.g., a position of the hand moving in one direction over time). The gesture is then stretched or shrunk and fitted to a template to extract information about speed over time. Training parameters include the curvature and torsion of the resulting curve.
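The time-to-space conversion mentioned above can be realized as uniform arc-length resampling, as in the following sketch; the sample count and the 3D path format are assumptions for illustration.

```python
import numpy as np

def resample_uniform(points, n_samples=64):
    """Resample a gesture path (k x 3 array of positions) at uniform
    arc-length intervals, discarding timing so the gesture is
    speed-normalized before template matching."""
    points = np.asarray(points, dtype=float)
    deltas = np.diff(points, axis=0)
    seg = np.linalg.norm(deltas, axis=1)         # Euclidean (dx²+dy²+dz²)^(1/2)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_samples)
    return np.stack([np.interp(targets, s, points[:, i]) for i in range(3)],
                    axis=1)
```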
In action 864, if the direction of the gesture is important, the user is asked to demonstrate various reasonable directions and various unreasonable ones. As a result, the automatically generated recognizer is enabled to output the following information: whether the gesture was issued, a certainty level and/or error, and/or kinematic parameters (e.g., motion, path, start and stop points, arc length, translation range, etc., and/or combinations thereof, and/or parameters computed from such combinations). If direction is unimportant, the training parameters are simply curvature and torsion.
In addition, in action 866, the user is asked to decide whether sloppy gestures should be accepted. If so, the system asks the user to demonstrate a very sloppy but still acceptable gesture. Otherwise, the system determines the boundary of what is acceptable by asking the user to demonstrate barely acceptable and unacceptable attempts at the gesture.
Finally, in action 868, after all the relevant parameters have been established during training, the gesture-recognition capability of the system is tested. The user may be asked to perform a gesture (either the gesture the system has just been trained to recognize or some other gesture). To indicate the beginning and end of the gesture, the user can press, for example, the space bar on a keyboard. After the user performs the gesture, the system indicates whether it recognized the gesture as the one on which it was previously trained, and asks the user to confirm or correct that determination. The test can be repeated any number of times; the results of multiple successful tests can be combined (e.g., averaged), or the user can select a single best result. The foregoing interactions are merely examples: other embodiments may ask the questions or present the prompts in a different order, and/or ask additional or different questions.
Thus, the 3D user-interaction techniques described herein enable users to intuitively control and manipulate electronic devices and virtual objects simply by performing body gestures. Because the gesture-recognition system facilitates rendering of reconstructed 3D images of the gestures with high detection sensitivity, dynamic user interactions for display control are achieved in real time without excessive computational complexity. For example, the user can dynamically control the relationship between her actual motion and the corresponding action rendered on the screen. In addition, the device can display an on-screen visual indicator reflecting the degree of completion of the user's gesture in real time. The disclosed technology thereby enables users to interact dynamically with virtual objects rendered on the screen and advantageously enhances the realism of the virtual environment.
The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments of the disclosed technology, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without departing from the spirit and scope of the disclosed technology. Accordingly, the described embodiments are to be considered in all respects as illustrative only and not restrictive.
Embodiments
In one embodiment, a method of distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space is described. The method includes detecting positions of an arm and the attached wrist and fingers in the 3D sensory space using an electronic sensor; while the arm is in motion, distinguishing flexing of the wrist and fingers from the overall trajectory of an arm gesture by calculating, from a series of detected positions, the spatial trajectory of a waving gesture made by the arm, calculating the spatial trajectory of the flexing of the wrist and/or fingers from the detected positions, and determining whether the waving gesture dominates the flexing based on the magnitudes of the respective spatial trajectories. Flexing of the wrist and fingers refers to motion of the fingers toward and/or away from the wrist, inward and/or outward. In another embodiment, the waving gesture made by the arm refers to inward and/or outward sweeps of the arm from side to side. The method also includes triggering a response to the dominant gesture without triggering a response to the non-dominant gesture.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed. For brevity, the combinations of features disclosed in this application are not enumerated individually, and the features are not repeated for each base set. The reader will understand how the features identified in this section can readily be combined with the sets of base features identified as embodiments.
In one embodiment, the magnitude of the spatial trajectory of the waving gesture is determined at least in part by the distance traversed in making the waving gesture. In another embodiment, the magnitude of the spatial trajectory of the flexing is determined at least in part by the scale of curling of the fingers.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of distinguishing between two simultaneous gestures originating from a single object in a 3D sensory space is described. The method includes distinguishing flexing of the wrist and fingers from the overall trajectory of an arm gesture while the arm is in motion by: detecting positions of the arm and the attached wrist and fingers in the 3D sensory space using an electronic sensor; calculating, from a series of detected positions, the spatial trajectory of a waving gesture made by the arm, wherein the magnitude of the spatial trajectory is determined at least in part by the distance traversed in making the waving gesture; calculating the spatial trajectory of the flexing of the wrist and/or fingers, wherein the magnitude of the spatial trajectory of the flexing is determined at least in part by the scale of curling of the fingers and the degrees of freedom between the fingers; and evaluating the magnitude of each spatial trajectory and determining a dominant gesture based on those magnitudes. Flexing of the wrist and fingers refers to motion of the fingers toward and/or away from the wrist, inward and/or outward. In another embodiment, the waving gesture made by the arm refers to inward and/or outward sweeps of the arm from side to side. The method also includes triggering a response to the overall trajectory in accordance with the dominant gesture.
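One plausible reduction of this dominance test to code is sketched below; the curl metric, its units, and the scale factor relating curl to travel distance are assumptions, not values from the disclosure.

```python
import numpy as np

def dominant_gesture(arm_positions, curl_angles):
    """Compare the magnitude of the arm's waving trajectory (distance
    traversed) with the magnitude of finger flexing (change in curl) and
    return whichever dominates."""
    arm_positions = np.asarray(arm_positions, dtype=float)  # k x 3 positions
    wave_magnitude = np.linalg.norm(np.diff(arm_positions, axis=0),
                                    axis=1).sum()
    curl_angles = np.asarray(curl_angles, dtype=float)      # per-frame curl (rad)
    flex_magnitude = np.abs(np.diff(curl_angles)).sum()
    # Illustrative factor putting radians of curl and meters of travel on a
    # comparable footing.
    FLEX_SCALE = 0.1
    return "wave" if wave_magnitude > FLEX_SCALE * flex_magnitude else "flex"
```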
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of responding uniformly to gesture inputs from a user irrespective of the user's position in the 3D sensory space is described. The method includes automatically adjusting the scaling between gestures in physical space and the resulting responses in a gesture interface by: calculating the distance of a control object from an electronic camera attached to the gesture interface; scaling the apparent angle traversed by the motion in the camera's field of view, based on the control object's distance from the camera, to a scaled movement distance; and automatically adjusting the responsive movement distance to reflect the scaled gesture in physical space rather than the apparent angle traversed.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed.
The method also includes reducing the on-screen responsiveness of the gesture interface when the apparent angle traversed is below a threshold, and increasing the on-screen responsiveness of the gesture interface when the apparent angle traversed is above a threshold.
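A minimal sketch of the distance compensation at the heart of this method is shown below; the small-angle arc-length approximation is an assumption made for illustration.

```python
def scaled_movement(apparent_angle_rad: float, distance_m: float) -> float:
    """Convert the apparent angle a control object sweeps in the camera's
    view into a physical movement distance, so the interface responds to the
    gesture itself rather than to how far away the user stands."""
    # Small-angle arc length: the same hand motion subtends a smaller
    # apparent angle at a greater distance from the camera.
    return apparent_angle_rad * distance_m

# The same ~10 cm sweep yields the same response at 0.5 m and at 2 m:
print(scaled_movement(0.2, 0.5))   # 0.1
print(scaled_movement(0.05, 2.0))  # 0.1
```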
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of adjusting the responsiveness of virtual objects in a gesture interface in a 3D sensory space is described. The method includes adjusting the response ratio between gestures in physical space and the resulting responses of virtual objects in the gesture interface by: calculating the density of virtual objects in the gesture interface based on the number of virtual objects, and, responsive to the density of virtual objects in the gesture interface, automatically adjusting the ratio of the on-screen responsiveness of the virtual objects to the gestures.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed.
The method also includes automatically assigning a low on-screen responsiveness to the virtual objects in response to a given gesture when the content density is above a threshold, and automatically assigning a high on-screen responsiveness to the virtual objects in response to a given gesture when the content density is below a threshold.
In another embodiment, a method of responding consistently to gesture inputs from multiple users in a 3D sensory space is described. The method includes automatically adjusting the response ratio between gestures made in the physical space of multiple users and the resulting responses in a shared gesture interface by: calculating a user spacing based on the detected spacing of the users in the 3D sensory space, and, responsive to the user spacing, automatically adjusting the ratio of the on-screen responsiveness of the shared gesture interface when interpreting the movement distances of gestures in physical space.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of detecting whether a user intends to interact with a virtual object in a 3D sensory space is described. The method includes detecting a clicking gesture of a finger in the 3D sensory space using an electronic sensor, and determining from the degree of completion of the clicking gesture whether to interpret the clicking gesture as interaction with the virtual object in the 3D sensory space. The clicking gesture of a finger refers to a downward or upward extension of one finger while other fingers remain extended or curled. The determining includes calculating the distance traversed by the finger in making the clicking gesture, accessing a gesture database to determine a gesture-completion value corresponding to the calculated distance of the clicking gesture, and recognizing the click as manipulating the virtual object in response to the completion value exceeding a threshold.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed.
In one embodiment, the gesture database includes trajectories of different gestures and corresponding gesture-completion values. In another embodiment, the method also includes calculating the degree of completion of the clicking gesture by comparing the spatial trajectory of the clicking gesture against at least one spatial trajectory stored in the gesture database. It also includes measuring the degree of completion of the clicking gesture by associating the making of the clicking gesture with an interface element representing a virtual control and modifying the interface element in real time as the clicking gesture is made. In another embodiment, the method also includes providing a hollow circular icon as the interface element and modifying the icon in real time by gradually filling the circular icon in response to the clicking gesture.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In one embodiment, a method of detecting whether a user intends to interact with a virtual object in a 3D sensory space is described. The method includes detecting a clicking gesture of a finger in the 3D sensory space using an electronic sensor; responsive to detecting the clicking gesture, activating an on-screen indicator displaying the degree of completion of the clicking gesture; and modifying the virtual object in response to a degree of completion of the clicking gesture that exceeds a threshold. The clicking gesture of a finger refers to a downward or upward extension of one finger while other fingers remain extended or curled.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of manipulating virtual objects in a 3D sensory space is described. The method includes selecting a virtual object in a gesture interface in response to a clicking gesture of a finger in the 3D sensory space; detecting a subsequent pointing gesture of the finger in the 3D sensory space while the virtual object remains selected; and calculating a force vector of the pointing gesture. The clicking gesture of a finger refers to a downward or upward extension of one finger while other fingers remain extended or curled. In another embodiment, the magnitude of the force vector is based on the distance traversed by the finger in making the pointing gesture and on the velocity of the finger during the pointing gesture. The method also includes applying the force vector to the virtual object, and modifying the virtual object, when the magnitude of the force vector exceeds a threshold.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed.
In one embodiment, modifying the virtual object includes changing the shape of the virtual object. In another embodiment, modifying the virtual object includes changing the position of the virtual object.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of creating interface elements in a 3D sensory space is described. The method includes detecting a circular sweep of a finger in the 3D sensory space using an electronic sensor; detecting a subsequent lateral sweep of the finger in the 3D sensory space; and, responsive to the subsequent lateral sweep, registering a press of a virtual on-screen button and performing at least one associated function. The circular sweep of a finger refers to a clockwise or counterclockwise movement of the finger in free space. In another embodiment, the lateral sweep of a finger refers to a forward or backward movement of the finger while the fingertip points toward an on-screen control.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed.
In one embodiment, the associated function is selected based on the context of the gesture interface. In another embodiment, the associated function is selected based on the position of the virtual on-screen button in the gesture interface. The method also includes interpreting the lateral sweep as a left mouse click if no less than a threshold percentage of the lateral sweep motion is in the pointing direction of the finger, and interpreting the lateral sweep as a right mouse click if no less than a threshold percentage of the lateral sweep motion is in the direction opposite the finger's pointing direction.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of creating interface elements in a 3D sensory space is described. The method includes detecting a two-finger vertical sweep in the 3D sensory space; constructing a vertical slider in the gesture interface in response to the two-finger vertical sweep; detecting a subsequent vertical sweep of a finger near the vertical slider in the 3D sensory space; scrolling the vertical slider in response to the finger's vertical sweep; and performing at least one associated function. The two-finger vertical sweep refers to an upward or downward motion, in free space, of two fingers of a hand while the other fingers of the hand are curled. In another embodiment, a single-finger vertical sweep refers to an upward or downward motion, in free space, of one finger of the hand while the other fingers of the hand are curled.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed.
In one embodiment, the associated function is selected based on the context of the gesture interface. In another embodiment, the associated function is selected based on the position of the vertical slider in the gesture interface.
In another embodiment, a method of manipulating a grayscale-selection widget with free-space gestures in a 3D sensory space is described. The method includes associating an on-screen puck with a grayscale-selection widget such that movement of the on-screen puck modifies the grayscale value on the widget. It includes changing the position of the on-screen puck in response to a finger click detected in the 3D sensory space using an electronic sensor, and selecting, on the grayscale-selection widget, the particular grayscale value corresponding to the x or y position of the puck on the screen. The finger-click gesture refers to a first finger held in a confined position relative to a second finger, followed by a rapid movement of the second finger away from the first finger.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of manipulating multiple controls of a gesture interface with free-space gestures in a 3D sensory space is described. The method includes associating an on-screen puck with a display-settings widget and a grayscale-selection widget such that movement of the on-screen puck modifies the brightness value on the display-settings widget and the grayscale value on the grayscale-selection widget. It includes changing the position of the on-screen puck in response to a finger click detected in the 3D sensory space using an electronic sensor, and selecting the particular brightness and grayscale values corresponding to the x or y position of the puck on the screen.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of creating interface elements in a 3D sensory space is described. The method includes detecting a circular sweep of a finger in the 3D sensory space using an electronic sensor; constructing an on-screen puck in the gesture interface in response to the circular sweep; detecting a subsequent swirling sweep of the finger in the 3D sensory space; rotating the puck in response to the subsequent swirling sweep; and performing at least one associated function. The circular sweep of a finger refers to a clockwise or counterclockwise movement of the finger in free space. In another embodiment, the swirling motion of a finger refers to the finger repeatedly making clockwise or counterclockwise movements in free space combined with upward or downward movements of the finger.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In one embodiment, a method of distinguishing gestures of interest from gestures not of interest in a three-dimensional (3D) sensory space is described. The method includes receiving input defining reference characteristics of one or more reference gestures; detecting one or more actual gestures in the 3D sensory space using an electronic sensor and determining their actual characteristics from the sensor data; comparing the actual gestures against the reference gestures to determine a set of gestures of interest; and supplying the set of gestures of interest and corresponding gesture parameters for further processing.
This and other embodiments of the disclosed technology can each include one or more of the following features and/or features described in connection with the other methods disclosed. For brevity, the combinations of features disclosed in this application are not enumerated individually, and the features are not repeated for each base set. The reader will understand how the features identified in this section can readily be combined with the sets of base features identified as embodiments.
In one embodiment, when the reference characteristic is gesture path, actual gestures with a straight-line path (such as a lateral wave) are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture velocity, actual gestures with a high velocity are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture configuration, actual gestures made with a hand pointing with a particular finger are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture configuration, actual gestures made with a clenched fist are interpreted as the set of gestures of interest.
In another embodiment, when the reference characteristic is gesture shape, actual thumbs-up hand gestures are interpreted as the set of gestures of interest. According to one embodiment, when the reference characteristic is gesture length, waving gestures are interpreted as the set of gestures of interest. In another embodiment, when the reference characteristic is gesture position, actual gestures made at a distance from the electronic sensor below a threshold are interpreted as the set of gestures of interest. When the reference characteristic is gesture duration, actual gestures that persist in the 3D sensory space for a threshold time period, but not actual gestures that persist in the 3D sensory space for less than the threshold time period, are interpreted as the set of gestures of interest.
Other embodiments may include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another embodiment may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
In another embodiment, a method of customizing gesture interpretation for a particular user is described. The method includes prompting the user to select characteristic values for a gesture in free space and receiving the selected characteristic values; prompting the user to perform a characteristic-focused demonstration of boundaries of the gesture in a three-dimensional (3D) sensory space; determining a set of parameters of the gesture from the boundary demonstration captured by an electronic sensor; and storing the set of parameters and corresponding values for use in gesture recognition.
This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with the other methods disclosed.
The method can also include using a questionnaire to prompt the user to select the characteristic values of gestures. In one implementation, prompting the user with the questionnaire includes receiving from the user a minimum threshold time period a gesture must persist in the 3D sensory space before the gesture is interpreted. In another implementation, performing a boundary demonstration includes the user making a pointing gesture with a particular finger to establish a gesture configuration. Performing a boundary demonstration can also include the user making a fist with a hand to establish a gesture configuration.
In one implementation, performing a boundary demonstration includes the user making a thumbs-up or thumbs-down gesture with a hand to establish a gesture shape. In another implementation, performing a boundary demonstration includes the user making a pinching gesture to set a minimum gesture distance as a gesture size, or making a waving gesture to set a maximum gesture distance as a gesture size.
In another implementation, performing a boundary demonstration includes the user making a finger-flicking gesture to set a fastest gesture speed, or making a waving gesture to set a slowest gesture speed. Performing a boundary demonstration can also include the user making a lateral swipe to establish a straight-line gesture path, or making a circular sweep to establish a circular gesture path.
The method can also include testing the interpretation of a particular gesture by prompting the user to perform a complete demonstration of the particular gesture in the 3D sensory space, determining a set of parameters for the particular gesture from the demonstration captured by the electronic sensor, comparing the set of parameters for the particular gesture against the set of parameters determined from the boundary demonstrations and the selected characteristic values, reporting the result of the comparison to the user, and receiving confirmation of whether the interpretation of the particular gesture is correct.
Other implementations can include a non-transitory computer-readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
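One way to picture the customization flow is the hedged sketch below, with console prompts standing in for the questionnaire and a hypothetical capture_demonstration helper standing in for the sensor pipeline.

    def capture_demonstration(prompt_text):
        # Hypothetical stand-in for capturing a boundary demonstration with
        # an electronic sensor; here it simply returns simulated measurements.
        print(prompt_text)
        return {"distance": 0.25, "speed": 1.4}

    def customize_gesture(user_profile):
        """Prompt for characteristic values, capture boundary demonstrations,
        derive a parameter set, and store it for later gesture recognition."""
        # Questionnaire-style selection of a characteristic value.
        min_hold = float(input("Minimum time (s) a gesture must persist: "))

        # Boundary demonstrations bracketing the gesture.
        pinch = capture_demonstration("Pinch to set the MINIMUM gesture distance")
        wave = capture_demonstration("Wave to set the MAXIMUM gesture distance")

        # Derive gesture parameters from the captured boundaries.
        params = {
            "min_distance": pinch["distance"],
            "max_distance": wave["distance"],
            "min_duration": min_hold,
        }

        # Store the parameters and corresponding values for recognition.
        user_profile["gesture_params"] = params
        return params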
In yet another aspect, a machine-implemented method of recognizing a gesture includes: prompting for input broadly defining one or more characteristics of a gesture that conveys information to a machine in free space (whether or not surface contact is involved), receiving the one or more input characteristics, determining from the received input a set of training parameters defining the gesture, prompting for input of at least one example of the gesture, determining from the at least one example of the gesture a set of values corresponding to the set of training parameters, and providing the set of values to a memory for use in recognizing gestures. The method can include storing a set of object parameters defining at least one object associated with the gesture, the at least one object being presentable on a contact-free display.
Determining a set of values corresponding to the set of training parameters from the at least one example of the gesture can include determining, based at least in part on the one or more characteristics, whether to normalize at least one of the set of training values, and, optionally, determining, based at least in part on the one or more characteristics, whether to ignore at least one of the set of training values (which can include information indicating whether a size of the gesture matters). The set of training parameters defining the gesture can also include at least one parameter defining at least one motion of the gesture. Prompting for input of at least one example of the gesture can include prompting for input of a minimum reasonable motion or a maximum reasonable motion.
In yet another aspect, the technology disclosed relates to a non-transitory computer-readable medium storing one or more instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: prompt for input broadly defining one or more characteristics of a gesture that conveys information to a machine in free space (whether or not surface contact is involved), receive the one or more input characteristics, determine from the received input a set of training parameters defining the gesture, prompt for input of at least one example of the gesture, determine from the at least one example of the gesture a set of values corresponding to the set of training parameters, and provide the set of values to a memory for use in recognizing gestures.
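The normalization decision mentioned above (whether a gesture's size matters) can be sketched as follows; the unit-extent rescaling is one plausible choice, not the one prescribed here.

    def normalize_if_needed(trajectory, size_matters):
        """Rescale a trajectory of (x, y, z) points to unit extent when the
        gesture's size is declared irrelevant; otherwise keep raw values."""
        if size_matters:
            return trajectory  # absolute scale is meaningful; keep as-is
        xs, ys, zs = zip(*trajectory)
        extent = max(max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
        if extent == 0:
            return trajectory  # degenerate (stationary) example
        return [(x / extent, y / extent, z / extent) for x, y, z in trajectory]

    # A small swipe and a large swipe normalize to the same template:
    print(normalize_if_needed([(0, 0, 0), (0.1, 0, 0)], size_matters=False))
    print(normalize_if_needed([(0, 0, 0), (0.4, 0, 0)], size_matters=False))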
In yet another aspect, the technology disclosed relates to methods of controlling dynamic interactions between a user and a device. In representative implementations, the method includes capturing multiple temporally sequential images of the user; computationally analyzing the images to recognize a gesture of the user and identify a scale associated with it, the scale indicating an actual gesture distance traversed in performing the gesture; computationally determining a ratio between the scale and a displayed movement, the displayed movement corresponding to an action to be displayed on the device; displaying the action on the device based on the ratio; and adjusting the ratio based on an external parameter. The external parameter can be the actual gesture distance, or a ratio of a pixel distance, in the captured images, corresponding to the performed action to a screen size in units of pixels.
In various implementations, analyzing the images of the user includes (i) identifying shapes and positions of one or more body parts of the user in the images and (ii) reconstructing positions and shapes of the body parts in 3D space based on correlations between the identified shapes and positions of the body parts in the images. In one implementation, analyzing the images of the user also includes temporally combining the reconstructed positions and shapes of the body parts in 3D space. In addition, the method can include defining a 3D model of the body part and reconstructing the position and shape of the body part in 3D space based on the 3D model.
The scale can be identified by comparing the gesture against records in a gesture database, which can include a series of electronically stored records each associating a gesture with an input parameter. The gesture can be stored in the records as a vector.
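A gesture-database record of this kind might look like the sketch below, with the gesture stored as a vector and associated with an input parameter and a per-gesture display ratio. Field names and values are assumptions for the example.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class GestureRecord:
        name: str              # e.g. "lateral_swipe"
        template: List[float]  # the gesture stored as a vector
        input_parameter: str   # the action the gesture maps to
        display_ratio: float   # gesture distance -> displayed movement

    gesture_database = [
        GestureRecord("lateral_swipe", [1.0, 0.0, 0.0], "scroll", 2.5),
        GestureRecord("finger_click", [0.0, 0.0, -1.0], "select", 1.0),
    ]

    def displayed_movement(record, actual_distance, external_scale=1.0):
        # Scale the on-screen movement by the record's ratio, adjusted by
        # an external parameter (e.g. a pixel-distance-to-screen-size ratio).
        return actual_distance * record.display_ratio * external_scale

    print(displayed_movement(gesture_database[0], 0.2))  # 0.2 m of swipe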
In yet another aspect, the technology disclosed relates to a system enabling dynamic interactions between a user and a device having a display screen. In various implementations, the system includes one or more cameras oriented toward a field of view; one or more sources for directing illumination onto a user in the field of view; a gesture database including a series of electronically stored records, each record associating a gesture with an input parameter; and an image analyzer coupled to the cameras and the database. In one implementation, the image analyzer is configured to operate the cameras to capture multiple temporally sequential images of the user, analyze the images to recognize a gesture made by the user, and compare the recognized gesture against the records in the gesture database to identify the input parameter associated with it; the input parameter corresponds to an action that is displayed on the display in accordance with a ratio between the actual gesture distance traversed in performing the gesture and the displayed movement corresponding to the action, the image analyzer adjusting the ratio based on an external parameter.
The image analyzer can be further configured to (i) identify shapes and positions of one or more parts of the user's body in the images and (ii) reconstruct positions and shapes of the body parts in 3D space based on correlations between the identified shapes and positions of the body parts in the images. In addition, the image analyzer can be configured to define a 3D model of the body part and to reconstruct its position and shape in 3D space based on the 3D model. In one implementation, the image analyzer is configured to estimate a trajectory of the body part in 3D space.
The external parameter can be the actual gesture distance, or a ratio of a pixel distance of the performed gesture in the captured images to a screen size in units of pixels. Each gesture can have a different ratio stored in its database record, or all gestures in the gesture database can share the same ratio.
Another aspect of the technology disclosed relates to a method of dynamically displaying interactions between a user and a device. In representative implementations, the method includes (i) capturing multiple temporally sequential images of the user, (ii) computationally analyzing the images to recognize a gesture of the user, (iii) comparing the recognized gesture against records in a gesture database to identify the gesture, (iv) computationally determining a degree of completion of the identified gesture, and (v) modifying contents displayed on the device in accordance with the determined degree of completion. The contents can include an icon, a bar, a color gradient, or a color brightness.
In various implementations, the method includes repeating actions (i)-(v) until the degree of completion exceeds a predetermined threshold, whereupon the device is caused to take a triggered action. In one implementation, analyzing the images of the user includes identifying shapes and positions of one or more parts of the user's body in the images. In some implementations, the method also includes displaying an action responsive to the gesture in accordance with a physical simulation model and based on the degree of completion of the gesture.
In yet another aspect, the technology disclosed relates to a system for dynamic interactions between a user and a device having a display screen. In some implementations, the system includes one or more cameras oriented toward a field of view; one or more sources (e.g., light sources and/or sonic sources) for directing illumination onto a user in the field of view; a gesture database including a series of electronically stored records, each record specifying a gesture; and an image analyzer coupled to the cameras. In one implementation, the image analyzer is configured to operate the cameras to capture multiple temporally sequential images of the user, analyze the images to recognize the user's gesture, compare the recognized gesture against the records in the gesture database to identify the gesture, determine a degree of completion of the identified gesture, and display an indicator reflecting the determined degree of completion on the device's screen. The indicator can include an icon, a bar, a color gradient, or a color brightness.
In various implementations, the image analyzer is configured to determine whether the degree of completion exceeds a predetermined threshold and, if so, to cause the device to take a completion-triggered action. The image analyzer can be further configured to display an action responsive to the gesture in accordance with a physical simulation model and based on the degree of completion of the gesture. The displayed action can be further based on a movement model.
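As a rough sketch of the degree-of-completion loop (repeat the analysis, update an indicator, and fire a completion-triggered action past a threshold), with the indicator and action supplied as callbacks; the threshold value is illustrative.

    COMPLETION_THRESHOLD = 0.95  # illustrative, not prescribed

    def degree_of_completion(observed_distance, full_distance):
        """Fraction of the stored full-gesture distance traversed so far."""
        return min(observed_distance / full_distance, 1.0)

    def track_gesture(samples, full_distance, render_indicator, trigger_action):
        """samples yields cumulative distances from successive image analyses;
        render_indicator and trigger_action are caller-supplied callbacks."""
        for observed in samples:
            completion = degree_of_completion(observed, full_distance)
            render_indicator(completion)  # e.g. gradually fill a hollow icon
            if completion >= COMPLETION_THRESHOLD:
                trigger_action()          # the completion-triggered action
                break

    # Example: a click gesture expected to traverse 0.10 m in total.
    track_gesture([0.02, 0.05, 0.08, 0.10], 0.10,
                  render_indicator=lambda c: print(f"indicator {c:.0%} full"),
                  trigger_action=lambda: print("click!"))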
Another aspect of the technology disclosed relates to a method of controlling dynamic interactions between a user and a device. In representative implementations, the method includes capturing multiple temporally sequential images of the user, computationally analyzing the images of the user to recognize multiple gestures of the user, computationally determining a dominant gesture, and displaying an action on the device based on the dominant gesture.
The dominant gesture can be determined by filtering the multiple gestures. In one implementation, the filtering is performed iteratively. In addition, each gesture can be represented as a trajectory. In some implementations, each trajectory can be represented as a vector along six Euler degrees of freedom in Euler space, and the vector having the largest magnitude is determined to be the dominant gesture.
In various implementations, analyzing the images of the user includes (i) identifying shapes and positions of one or more parts of the user's body in the images and (ii) reconstructing positions and shapes of the body parts in 3D space based on correlations between the identified shapes and positions of the body parts in the images. In one implementation, the method also includes defining a 3D model of the body part and reconstructing the position and shape of the body part in 3D space based on the 3D model. Analyzing the images of the user can include temporally combining the reconstructed positions and shapes of the body parts in 3D space.
In yet another aspect, the technology disclosed relates to a system for controlling dynamic interactions between a user and a device. In various implementations, the system includes one or more cameras oriented toward a field of view; one or more sources (e.g., light sources and/or sonic sources) for directing illumination onto a user in the field of view; a gesture database including a series of electronically stored records, each record specifying a gesture; and an image analyzer coupled to the cameras and the database. In one implementation, the image analyzer is configured to operate the cameras to capture multiple temporally sequential images of the user, analyze the images of the user to recognize multiple gestures of the user, determine a dominant gesture, and display an action on the device based on the dominant gesture.
The image analyzer can be further configured to determine the dominant gesture by filtering the multiple gestures. In one implementation, the filtering is performed iteratively. In addition, the image analyzer can be configured to represent each gesture as a trajectory, and each trajectory as a vector along six Euler degrees of freedom in Euler space, the vector having the largest magnitude being determined to be the dominant gesture.
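Selecting the dominant gesture from six-degree-of-freedom trajectory vectors can reduce to a largest-magnitude comparison, as in this minimal sketch (the sample values are invented).

    import math

    def magnitude(vec6):
        """Euclidean magnitude of a six-DOF vector (x, y, z, roll, pitch, yaw)."""
        return math.sqrt(sum(c * c for c in vec6))

    def dominant_gesture(trajectories):
        """trajectories maps gesture names to six-DOF vectors in Euler space;
        the gesture whose vector has the largest magnitude is dominant."""
        return max(trajectories, key=lambda name: magnitude(trajectories[name]))

    gestures = {
        "arm_wave":    (0.40, 0.05, 0.0, 0.0, 0.0, 0.1),  # large translation
        "finger_curl": (0.01, 0.01, 0.0, 0.2, 0.0, 0.0),  # mostly rotation
    }
    print(dominant_gesture(gestures))  # -> "arm_wave" in this example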
In one aspect, a method of controlling dynamic interactions between a user and a device includes capturing multiple temporally sequential images of the user, computationally analyzing a subset of the images of the user to recognize a gesture of the user contacting a position of an on-screen virtual disk, computationally analyzing another subset of the images of the user to recognize a gesture of the user sliding the on-screen virtual disk to a new position, and modifying a parameter of a software application in accordance with the new position of the disk.
The disk can be circular, square, or triangular; the parameter can be a color, and sliding the disk can change the color. Recognizing the user's gesture sliding the on-screen virtual disk can include requiring the gesture to traverse a threshold distance before the parameter is modified. The disk can continue moving for a period of time after the user's gesture breaks contact with the disk, or can spring back to a fixed position after the user's gesture breaks contact with the disk. A subset of the images of the user can be computationally analyzed to recognize a gesture of the user commanding creation of a user-interface element. The gesture can be a circular motion, a two-finger lateral motion, or a forward-and-back motion of the user's finger, and the user-interface element can be, respectively, a button, a slider, or a mouse click.
In yet another aspect, a system enabling a user to interact dynamically with a device having a display screen includes a camera oriented toward a field of view; a source for directing illumination onto a user in the field of view; and a gesture database including a series of electronically stored records, each record associating a gesture with an input parameter. An image analyzer coupled to the camera is configured to capture multiple temporally sequential images of the user, computationally analyze a subset of the images of the user to recognize a gesture of the user contacting a position of an on-screen virtual disk, computationally analyze another subset of the images of the user to recognize a gesture of the user sliding the on-screen virtual disk to a new position, and modify a parameter of a software application in accordance with the new position of the disk.
The features described above in connection with the method (disk shapes, parameter modification, threshold distances, post-contact disk motion, and gesture-commanded creation of user-interface elements) apply equally to this system.
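The virtual-disk behavior described above might be modeled as in the sketch below, where a drag past a threshold distance remaps the disk position to an application parameter and the disk springs back on release. All names and numbers are illustrative.

    DRAG_THRESHOLD = 0.05  # meters a gesture must traverse before modifying

    class VirtualDisk:
        def __init__(self, home=(0.5, 0.5), spring_back=True):
            self.home = home
            self.position = home
            self.spring_back = spring_back

        def drag_to(self, new_position, gesture_distance, app):
            if gesture_distance < DRAG_THRESHOLD:
                return  # ignore small, likely unintended motions
            self.position = new_position
            # Map the disk's x position (0..1) onto a color parameter.
            app["hue"] = int(self.position[0] * 360)

        def release(self):
            if self.spring_back:
                self.position = self.home  # snap back to the fixed position

    app_settings = {"hue": 0}
    disk = VirtualDisk()
    disk.drag_to((0.8, 0.5), gesture_distance=0.12, app=app_settings)
    disk.release()
    print(app_settings)  # {'hue': 288}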
Throughout this specification, references to "one implementation," "an implementation," "one example," or "an example" mean that a particular feature, structure, or characteristic described in connection with the implementation or example is included in at least one implementation or example of the technology. Thus, appearances of the phrases "in one implementation," "in an implementation," "in one example," or "in an example" in various places throughout this specification do not necessarily all refer to the same implementation or example. Furthermore, particular features, structures, routines, actions, or characteristics can be combined in any suitable manner in one or more implementations or examples of the technology. The headings provided herein are for convenience only and are not intended to limit or interpret the scope or meaning of the claimed technology.

Claims (54)

1. A method of distinguishing meaningful gestures from proximate non-meaningful gestures in a three-dimensional (3D) sensory space, the method comprising:
distinguishing flexing of a wrist and fingers from an overall trajectory of an arm waving gesture while the arm is in motion, including:
detecting positions of the arm and the attached wrist and fingers in the 3D sensory space using an electronic sensor;
calculating from a series of the detected positions a spatial trajectory of a waving gesture performed by the arm;
calculating from the detected positions a spatial trajectory of the flexing of the wrist and/or fingers; and
determining, based on magnitudes of the respective spatial trajectories, whether the waving gesture or the flexing is a dominant gesture; and
triggering a response to the dominant gesture without triggering a response to the non-dominant gesture.
2. The method of claim 1, wherein the magnitude of the spatial trajectory of the waving gesture is determined at least in part by a distance traversed in making the waving gesture.
3. The method of claim 1, wherein the magnitude of the spatial trajectory of the flexing is determined at least in part by a degree of curling of the fingers.
4. A method of distinguishing between two simultaneously made gestures originating from a single object in a three-dimensional (3D) sensory space, the method comprising:
distinguishing flexing of a wrist and fingers from an overall trajectory of an arm gesture while the arm is in motion, including:
detecting positions of the arm and the attached wrist and fingers in the 3D sensory space using an electronic sensor;
calculating from a series of the detected positions a spatial trajectory of a waving gesture performed by the arm, wherein a magnitude of the spatial trajectory is determined at least in part by a distance traversed in making the waving gesture;
calculating from the detected positions a spatial trajectory of the flexing of the wrist and/or fingers, wherein a magnitude of the spatial trajectory is determined at least in part by a degree of curling of the fingers; and
evaluating the magnitudes of the respective spatial trajectories and determining a dominant gesture based on the magnitudes; and
triggering a response to the overall trajectory in accordance with the dominant gesture.
5. A method of uniformly responding to gestural inputs from a user in a three-dimensional (3D) sensory space irrespective of a position of the user, the method comprising:
automatically adjusting a scaling between gestures in a physical space and the responses they cause in a gestural interface, including:
calculating a distance of a control object from an electronic camera coupled to the gestural interface;
scaling an apparent angle traversed in the camera's field of view to a scaled movement distance, based on the distance of the control object from the camera; and
automatically adjusting a ratio between the responses and the scaled movement distance so as to reflect the gesture in the physical space rather than the apparent angle traversed.
6. The method of claim 5, further comprising reducing an on-screen responsiveness of the gestural interface when the apparent angle traversed is below a threshold.
7. The method of claim 5, further comprising increasing an on-screen responsiveness of the gestural interface when the apparent angle traversed is above a threshold.
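(Editorial illustration, not claim language: the scaling of claims 5-7 can be pictured as below. The same hand movement subtends a smaller apparent angle farther from the camera, so the response is proportioned to the reconstructed physical distance. The small-angle arc-length formula and all numbers are assumptions.)

    def physical_distance_traversed(apparent_angle_rad, distance_to_camera):
        """Approximate the physical arc length a control object traversed from
        the apparent angle it swept in the camera's field of view."""
        return apparent_angle_rad * distance_to_camera  # small-angle arc length

    def on_screen_response(apparent_angle_rad, distance_to_camera,
                           pixels_per_meter=2000.0, min_angle=0.005):
        # Below a threshold apparent angle, damp the response (cf. claim 6).
        if apparent_angle_rad < min_angle:
            return 0.0
        moved = physical_distance_traversed(apparent_angle_rad, distance_to_camera)
        return moved * pixels_per_meter  # pixels of cursor travel

    # The same 0.1 m hand movement, near the camera and far from it, yields
    # the same on-screen response despite very different apparent angles:
    print(on_screen_response(0.2, 0.5))   # near user
    print(on_screen_response(0.05, 2.0))  # far user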
8. A method of adjusting responsiveness of virtual objects of a gestural interface in a three-dimensional (3D) sensory space, the method comprising:
automatically adjusting a response ratio between gestures in a physical space and the responses of virtual objects they cause in the gestural interface, including:
calculating a density of virtual objects of the gestural interface based on a number of the virtual objects; and
automatically adjusting, responsive to the density of the virtual objects of the gestural interface, a ratio of on-screen responsiveness of the virtual objects to the gestures.
9. The method of claim 8, further comprising automatically assigning a low on-screen responsiveness to the virtual objects, responsive to a particular gesture, when the content density is above a threshold.
10. The method of claim 9, further comprising automatically assigning a high on-screen responsiveness to the virtual objects, responsive to a particular gesture, when the content density is below a threshold.
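(Editorial illustration, not claim language: one way to realize the density-dependent responsiveness of claims 8-10. The gain values and threshold are invented for the example.)

    def responsiveness_for_density(num_objects, screen_area, dense_threshold=0.5):
        """Return a gesture-response gain: dense interfaces get a low gain for
        finer control, sparse interfaces a high gain."""
        density = num_objects / screen_area  # objects per unit of screen area
        return 0.4 if density > dense_threshold else 1.5

    def apply_gesture(cursor_px, gesture_px, num_objects, screen_area=100.0):
        gain = responsiveness_for_density(num_objects, screen_area)
        return cursor_px + gesture_px * gain

    print(apply_gesture(0, 100, num_objects=80))  # crowded screen: 40.0 px
    print(apply_gesture(0, 100, num_objects=10))  # sparse screen: 150.0 px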
11. A method of consistently responding to gestural inputs from multiple users in a three-dimensional (3D) sensory space, the method comprising:
automatically adjusting a response ratio between gestures in a physical space from multiple users and the responses they cause in a shared gestural interface by:
calculating a user spacing in the 3D sensory space based on detected spacing of the users in the 3D sensory space; and
automatically adjusting, responsive to the user spacing, a ratio of on-screen responsiveness of the shared gestural interface when interpreting movement distances of gestures in the physical space.
12. A method of detecting whether a user intends to interact with a virtual object in a three-dimensional (3D) sensory space, the method comprising:
detecting a clicking gesture of a finger in the 3D sensory space using an electronic sensor; and
determining, in accordance with a degree of completion of the clicking gesture, whether to interpret the clicking gesture as interaction with a virtual object in the 3D sensory space, including:
calculating a distance traversed by the finger in making the clicking gesture;
accessing a gesture database to determine a gesture completion value corresponding to the calculated distance of the clicking gesture; and
recognizing the click as manipulating the virtual object responsive to the gesture completion value exceeding a threshold.
13. The method of claim 12, wherein the gesture database includes trajectories of different gestures and corresponding gesture completion values.
14. The method of claim 12, further comprising calculating the degree of completion of the clicking gesture by comparing a spatial trajectory of the clicking gesture against at least one spatial trajectory stored in the gesture database.
15. The method of claim 12, further comprising measuring the degree of completion of the clicking gesture by:
associating an interface element representing a virtual control with the making of the clicking gesture; and
modifying the interface element in real time as the clicking gesture is made.
16. The method of claim 15, wherein the interface element is a hollow circular icon, and modifying the icon in real time includes gradually filling the circular icon responsive to the clicking gesture.
17. A method of detecting whether a user intends to interact with a virtual object in a three-dimensional (3D) sensory space, the method comprising:
detecting a clicking gesture of a finger in the 3D sensory space using an electronic sensor;
activating, responsive to detecting the clicking gesture, an on-screen indicator displaying a degree of completion of the clicking gesture; and
modifying the virtual object responsive to the degree of completion of the clicking gesture exceeding a threshold.
18. A method of manipulating a virtual object in a three-dimensional (3D) sensory space, the method comprising:
selecting a virtual object of a gestural interface responsive to a clicking gesture of a finger in the 3D sensory space;
detecting a subsequent pointing gesture of the finger in the 3D sensory space while the virtual object remains selected, and calculating a force vector of the pointing gesture, wherein a magnitude of the force vector is based on:
a distance traversed by the finger in making the pointing gesture; and
a speed of the finger while making the pointing gesture; and
applying the force vector to the virtual object, and modifying the virtual object, when the magnitude of the force vector exceeds a threshold.
19. The method of claim 18, wherein modifying the virtual object includes changing a shape of the virtual object.
20. The method of claim 18, wherein modifying the virtual object includes changing a position of the virtual object.
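(Editorial illustration, not claim language: a sketch of the force vector of claims 18-20. Combining distance and speed by multiplication, and the threshold value, are assumptions for the example.)

    import math

    FORCE_THRESHOLD = 1.0  # illustrative magnitude threshold

    def force_vector(direction, distance, speed):
        """Force along the finger's pointing direction, with a magnitude that
        grows with the distance traversed and the finger's speed."""
        magnitude = distance * speed  # assumed combination of the two inputs
        norm = math.sqrt(sum(c * c for c in direction)) or 1.0
        unit = [c / norm for c in direction]
        return [magnitude * c for c in unit], magnitude

    def apply_pointing_gesture(obj_position, direction, distance, speed):
        vec, magnitude = force_vector(direction, distance, speed)
        if magnitude <= FORCE_THRESHOLD:
            return obj_position  # below threshold: leave the object unmodified
        return [p + v for p, v in zip(obj_position, vec)]  # displace the object

    # A 0.3 m pointing gesture at 5 m/s pushes the object along -z:
    print(apply_pointing_gesture([0.0, 0.0, 0.0], [0, 0, -1], 0.3, 5.0))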
21. A method of creating interface elements in a three-dimensional (3D) sensory space, the method comprising:
detecting a circular sweep of a finger in the 3D sensory space using an electronic sensor;
detecting a subsequent lateral swipe of the finger in the 3D sensory space; and
registering, responsive to the subsequent lateral swipe, a press of an on-screen virtual button, and performing at least one associated function.
22. The method of claim 21, wherein the associated function is selected based on a context of the gestural interface.
23. The method of claim 21, wherein the associated function is selected based on a position of the on-screen virtual button in the gestural interface.
24. The method of claim 21, further comprising interpreting the lateral swipe as a left mouse click if not less than a threshold percentage of the swiping motion is in a direction in which the finger points.
25. The method of claim 21, further comprising interpreting the lateral swipe as a right mouse click if not less than a threshold percentage of the swiping motion is in a direction opposite to the direction in which the finger points.
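(Editorial illustration, not claim language: the direction test of claims 24-25 can be expressed as a dot-product alignment between the swipe motion and the finger's pointing direction; the cosine threshold stands in for the claimed percentage.)

    def classify_swipe(swipe_vec, pointing_vec, threshold=0.8):
        """Interpret a lateral swipe as a left or right mouse click depending on
        whether most of its motion is with or against the pointing direction."""
        dot = sum(s * p for s, p in zip(swipe_vec, pointing_vec))
        swipe_len = sum(s * s for s in swipe_vec) ** 0.5
        point_len = sum(p * p for p in pointing_vec) ** 0.5
        if swipe_len == 0 or point_len == 0:
            return None
        alignment = dot / (swipe_len * point_len)  # cosine in [-1, 1]
        if alignment >= threshold:
            return "left_click"
        if alignment <= -threshold:
            return "right_click"
        return None  # ambiguous direction: no click registered

    print(classify_swipe((1.0, 0.1, 0.0), (1.0, 0.0, 0.0)))   # left_click
    print(classify_swipe((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # right_click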
26. A method of creating interface elements in a three-dimensional (3D) sensory space, the method comprising:
detecting a two-finger vertical swipe in the 3D sensory space;
constructing, responsive to the two-finger vertical swipe, a vertical slider in the gestural interface;
detecting a subsequent vertical swipe of a finger in the vicinity of the vertical slider in the 3D sensory space; and
scrolling the vertical slider responsive to the finger's vertical swipe, and performing at least one associated function.
27. The method of claim 26, wherein the associated function is selected based on a position of the vertical slider in the gestural interface.
28. A method of manipulating a grayscale selection widget using free-space gestures in a three-dimensional (3D) sensory space, the method comprising:
associating a grayscale selection widget with an on-screen virtual disk by modifying a grayscale value on the widget responsive to movement of the on-screen virtual disk, including:
changing a position of the on-screen virtual disk responsive to a finger click detected in the 3D sensory space using an electronic sensor; and
selecting a particular grayscale value on the grayscale selection widget corresponding to an x or y position of the disk on the screen.
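(Editorial illustration, not claim language: mapping the disk's on-screen position to a grayscale value, as in claim 28, is a simple linear remap; the screen width is an assumption.)

    def grayscale_from_disk(disk_x_px, screen_width_px):
        """Map the disk's x position on screen to a grayscale value 0-255."""
        fraction = max(0.0, min(1.0, disk_x_px / screen_width_px))
        return round(fraction * 255)

    # A finger click moves the disk to x = 960 on a 1920-pixel-wide screen:
    print(grayscale_from_disk(960, 1920))  # -> 128 (mid gray)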
29. A method of manipulating multiple controls of a gestural interface using free-space gestures in a three-dimensional (3D) sensory space, the method comprising:
associating an on-screen virtual disk with a display-setting widget and a grayscale selection widget by:
modifying a brightness value on the display-setting widget and a grayscale value on the grayscale selection widget responsive to movement of the on-screen virtual disk, including:
changing a position of the on-screen virtual disk responsive to a finger click detected in the 3D sensory space using an electronic sensor; and
selecting a particular brightness value and grayscale value corresponding to an x or y position of the disk on the screen.
30. A method of creating interface elements in a three-dimensional (3D) sensory space, the method comprising:
detecting a circular sweep of a finger in the 3D sensory space using an electronic sensor;
constructing, responsive to the circular sweep, an on-screen virtual disk in the gestural interface;
detecting a subsequent swirling motion of the finger in the 3D sensory space; and
rotating the disk responsive to the subsequent swirling motion, and performing at least one associated function.
31. A method of distinguishing gestures of interest from gestures of non-interest in a three-dimensional (3D) sensory space, the method comprising:
receiving input defining reference characteristics of one or more reference gestures;
detecting one or more actual gestures in the 3D sensory space using an electronic sensor, and determining actual characteristics from data provided by the electronic sensor; and
comparing the actual gestures against the reference gestures to determine a set of gestures of interest, and supplying the set of gestures of interest and corresponding gesture parameters to further processing.
32. The method of claim 31, wherein the reference characteristic is a gesture path, the method further comprising interpreting actual gestures having a straight-line path as gestures of interest.
33. The method of claim 32, further comprising interpreting a lateral swipe as a gesture of interest.
34. The method of claim 31, wherein the reference characteristic is a gesture velocity, the method further comprising interpreting actual gestures having a high velocity as gestures of interest.
35. The method of claim 31, wherein the reference characteristic is a gesture configuration, the method further comprising interpreting actual gestures made with a hand with a particular finger pointing as gestures of interest.
36. The method of claim 31, wherein the reference characteristic is a gesture configuration, the method further comprising interpreting actual gestures of a clenched fist as gestures of interest.
37. The method of claim 31, wherein, when the reference characteristic is a gesture shape, actual thumbs-up hand gestures are interpreted as gestures of interest.
38. The method of claim 31, wherein, when the reference characteristic is a gesture length, waving gestures are interpreted as gestures of interest.
39. The method of claim 31, wherein, when the reference characteristic is a gesture position, actual gestures made at a distance from the electronic sensor below a threshold are interpreted as gestures of interest.
40. The method of claim 31, wherein, when the reference characteristic is a gesture duration, actual gestures persisting in the 3D sensory space for at least a threshold time period, rather than actual gestures persisting in the 3D sensory space for less than the threshold time period, are interpreted as gestures of interest.
41. A method of customizing gesture interpretation for a particular user, the method comprising:
prompting the user to select characteristic values for gestures made in free space, and receiving the selected characteristic values;
prompting the user to perform demonstrations of gesture characteristic boundaries in a three-dimensional (3D) sensory space;
determining a set of gesture parameters from the boundary demonstrations captured by an electronic sensor; and
storing the set of parameters and the corresponding values for use in gesture recognition.
42. The method of claim 41, further comprising using a questionnaire to prompt the user to select the characteristic values of gestures.
43. The method of claim 41, wherein prompting the user to select characteristic values using the questionnaire includes receiving from the user a minimum threshold time period a gesture must persist in the 3D sensory space before the gesture is interpreted.
44. The method of claim 41, wherein performing a boundary demonstration includes the user making a pointing gesture with a particular finger to establish a gesture configuration.
45. The method of claim 41, wherein performing a boundary demonstration includes the user making a fist with a hand to establish a gesture configuration.
46. The method of claim 41, wherein performing a boundary demonstration includes the user making a thumbs-up or thumbs-down gesture with a hand to establish a gesture shape.
47. The method of claim 41, wherein performing a boundary demonstration includes the user making a thumbs-up or thumbs-down hand gesture to establish a gesture shape.
48. The method of claim 41, wherein performing a boundary demonstration includes the user making a pinching gesture to set a minimum gesture distance as a gesture size.
49. The method of claim 41, wherein performing a boundary demonstration includes the user making a waving gesture to set a maximum gesture distance as a gesture size.
50. The method of claim 41, wherein performing a boundary demonstration includes the user making a finger-flicking gesture to set a fastest gesture speed.
51. The method of claim 41, wherein performing a boundary demonstration includes the user making a waving gesture to set a slowest gesture speed.
52. The method of claim 41, wherein performing a boundary demonstration includes the user making a lateral swipe to establish a straight-line gesture path.
53. The method of claim 41, wherein performing a boundary demonstration includes the user making a circular sweep to establish a circular gesture path.
54. The method of claim 41, further comprising:
prompting the user to perform a complete demonstration of a particular gesture in the 3D sensory space;
determining a set of parameters for the particular gesture from performances of the particular gesture captured by the electronic sensor;
comparing the set of parameters for the particular gesture against the set of parameters determined from the boundary demonstrations and the selected characteristic values; and
reporting a result of the comparison to the user, and receiving confirmation of whether the interpretation of the particular gesture is correct.
CN201480014375.1A 2013-01-15 2014-01-15 Dynamic user interactions for display control and customized gesture interpretation Pending CN105308536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110836174.1A CN113568506A (en) 2013-01-15 2014-01-15 Dynamic user interaction for display control and customized gesture interpretation

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US201361752733P 2013-01-15 2013-01-15
US201361752731P 2013-01-15 2013-01-15
US201361752725P 2013-01-15 2013-01-15
US61/752,731 2013-01-15
US61/752,733 2013-01-15
US61/752,725 2013-01-15
US201361791204P 2013-03-15 2013-03-15
US61/791,204 2013-03-15
US201361808984P 2013-04-05 2013-04-05
US201361808959P 2013-04-05 2013-04-05
US61/808,959 2013-04-05
US61/808,984 2013-04-08
US201361872538P 2013-08-30 2013-08-30
US61/872,538 2013-08-30
PCT/US2014/011737 WO2014113507A1 (en) 2013-01-15 2014-01-15 Dynamic user interactions for display control and customized gesture interpretation

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110836174.1A Division CN113568506A (en) 2013-01-15 2014-01-15 Dynamic user interaction for display control and customized gesture interpretation

Publications (1)

Publication Number Publication Date
CN105308536A true CN105308536A (en) 2016-02-03

Family

ID=51210870

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201480014375.1A Pending CN105308536A (en) 2013-01-15 2014-01-15 Dynamic user interactions for display control and customized gesture interpretation
CN202110836174.1A Pending CN113568506A (en) 2013-01-15 2014-01-15 Dynamic user interaction for display control and customized gesture interpretation

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110836174.1A Pending CN113568506A (en) 2013-01-15 2014-01-15 Dynamic user interaction for display control and customized gesture interpretation

Country Status (3)

Country Link
CN (2) CN105308536A (en)
DE (1) DE112014000441T5 (en)
WO (1) WO2014113507A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9141198B2 (en) 2013-01-08 2015-09-22 Infineon Technologies Ag Control of a control parameter by gesture recognition
US9978202B2 (en) 2014-02-14 2018-05-22 Igt Canada Solutions Ulc Wagering gaming apparatus for detecting user interaction with game components in a three-dimensional display
US9799159B2 (en) 2014-02-14 2017-10-24 Igt Canada Solutions Ulc Object detection and interaction for gaming systems
US9558610B2 (en) 2014-02-14 2017-01-31 Igt Canada Solutions Ulc Gesture input interface for gaming systems
US10290176B2 (en) 2014-02-14 2019-05-14 Igt Continuous gesture recognition for gaming systems
CN104200491A (en) * 2014-08-15 2014-12-10 浙江省新华医院 Motion posture correcting system for human body
US20160062473A1 (en) * 2014-08-29 2016-03-03 Hand Held Products, Inc. Gesture-controlled computer system
WO2016205918A1 (en) * 2015-06-22 2016-12-29 Igt Canada Solutions Ulc Object detection and interaction for gaming systems
AU2015405544B2 (en) * 2015-08-07 2021-12-16 Igt Canada Solutions Ulc Three-dimensional display interaction for gaming systems
US9996164B2 (en) 2016-09-22 2018-06-12 Qualcomm Incorporated Systems and methods for recording custom gesture commands
CN107977071B (en) * 2016-10-24 2020-02-28 中国移动通信有限公司研究院 Operation method and device suitable for space system
CN106681516B (en) * 2017-02-27 2024-02-06 盛世光影(北京)科技有限公司 Natural man-machine interaction system based on virtual reality
CN109656432B (en) * 2017-10-10 2022-09-13 腾讯科技(深圳)有限公司 Control method, device, equipment and storage medium in virtual reality environment
KR102627014B1 (en) * 2018-10-02 2024-01-19 삼성전자 주식회사 electronic device and method for recognizing gestures
CN110134236B (en) * 2019-04-28 2022-07-05 陕西六道文化科技有限公司 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision
WO2020251385A1 (en) * 2019-06-14 2020-12-17 Ringcentral, Inc., (A Delaware Corporation) System and method for capturing presentation gestures
CN114286142B (en) * 2021-01-18 2023-03-28 海信视像科技股份有限公司 Virtual reality equipment and VR scene screen capturing method
DE102021132261A1 (en) * 2021-12-08 2023-06-15 Schneider Electric Industries Sas Arrangement for contactless operation of an electrical device, optically detectable tag, optical detection device and processing device for use in such an arrangement

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6720949B1 (en) * 1997-08-22 2004-04-13 Timothy R. Pryor Man machine interfaces and applications
DE602004006190T8 (en) * 2003-03-31 2008-04-10 Honda Motor Co., Ltd. Device, method and program for gesture recognition
US20120204133A1 (en) * 2009-01-13 2012-08-09 Primesense Ltd. Gesture-Based User Interface
US8289162B2 (en) * 2008-12-22 2012-10-16 Wimm Labs, Inc. Gesture-based user interface for a wearable portable device
US8232990B2 (en) * 2010-01-05 2012-07-31 Apple Inc. Working with 3D objects
US8631355B2 (en) * 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US8659658B2 (en) * 2010-02-09 2014-02-25 Microsoft Corporation Physical interaction zone for gesture-based user interfaces
US8457353B2 (en) * 2010-05-18 2013-06-04 Microsoft Corporation Gestures and gesture modifiers for manipulating a user-interface
US20110314427A1 (en) * 2010-06-18 2011-12-22 Samsung Electronics Co., Ltd. Personalization using custom gestures
US8842084B2 (en) * 2010-09-08 2014-09-23 Telefonaktiebolaget L M Ericsson (Publ) Gesture-based object manipulation methods and devices
US20120223959A1 (en) * 2011-03-01 2012-09-06 Apple Inc. System and method for a touchscreen slider with toggle control
CN102135796B (en) * 2011-03-11 2013-11-06 钱力 Interaction method and interaction equipment
US9086794B2 (en) * 2011-07-14 2015-07-21 Microsoft Technology Licensing, Llc Determining gestures on context based menus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216883A (en) * 2008-11-12 2011-10-12 苹果公司 Generating gestures tailored to a hand resting on a surface
CN101930286A (en) * 2009-06-22 2010-12-29 索尼公司 Operating control device, method of controlling operation thereof and computer-readable recording medium
CN102439538A (en) * 2009-12-29 2012-05-02 摩托罗拉移动公司 Electronic device with sensing assembly and method for interpreting offset gestures
CN102117117A (en) * 2010-01-06 2011-07-06 致伸科技股份有限公司 System and method for control through identifying user posture by image extraction device
CN102262438A (en) * 2010-05-18 2011-11-30 微软公司 Gestures and gesture recognition for manipulating a user-interface
US20120105613A1 (en) * 2010-11-01 2012-05-03 Robert Bosch Gmbh Robust video-based handwriting and gesture recognition for in-car applications
CN102402290A (en) * 2011-12-07 2012-04-04 北京盈胜泰科技术有限公司 Method and system for identifying posture of body

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US11269481B2 (en) 2013-01-15 2022-03-08 Ultrahaptics IP Two Limited Dynamic user interactions for display control and measuring degree of completeness of user gestures
US10241639B2 (en) 2013-01-15 2019-03-26 Leap Motion, Inc. Dynamic user interactions for display control and manipulation of display objects
US10564799B2 (en) 2013-01-15 2020-02-18 Ultrahaptics IP Two Limited Dynamic user interactions for display control and identifying dominant gestures
US9696867B2 (en) 2013-01-15 2017-07-04 Leap Motion, Inc. Dynamic user interactions for display control and identifying dominant gestures
US10782847B2 (en) 2013-01-15 2020-09-22 Ultrahaptics IP Two Limited Dynamic user interactions for display control and scaling responsiveness of display objects
US10817130B2 (en) 2013-01-15 2020-10-27 Ultrahaptics IP Two Limited Dynamic user interactions for display control and measuring degree of completeness of user gestures
US9632658B2 (en) 2013-01-15 2017-04-25 Leap Motion, Inc. Dynamic user interactions for display control and scaling responsiveness of display objects
US10042510B2 (en) 2013-01-15 2018-08-07 Leap Motion, Inc. Dynamic user interactions for display control and measuring degree of completeness of user gestures
US10620709B2 (en) 2013-04-05 2020-04-14 Ultrahaptics IP Two Limited Customized gesture interpretation
US11347317B2 (en) 2013-04-05 2022-05-31 Ultrahaptics IP Two Limited Customized gesture interpretation
US9747696B2 (en) 2013-05-17 2017-08-29 Leap Motion, Inc. Systems and methods for providing normalized parameters of motions of objects in three-dimensional space
CN105825325A (en) * 2016-03-10 2016-08-03 南京市建筑安装工程质量监督站 Project quality supervision personnel supervision capability evaluation method and device
CN105825325B (en) * 2016-03-10 2017-02-08 南京市建筑安装工程质量监督站 Project quality supervision personnel supervision capability evaluation method and device
CN108604122A (en) * 2016-05-10 2018-09-28 谷歌有限责任公司 The method and apparatus that prediction action is used in reality environment
CN107370936A (en) * 2016-05-12 2017-11-21 青岛海信宽带多媒体技术有限公司 Zooming method and zoom lens control device
CN106598240B (en) * 2016-12-06 2020-02-18 北京邮电大学 Menu item selection method and device
CN106598240A (en) * 2016-12-06 2017-04-26 北京邮电大学 Menu item selection method and device
CN110199251A (en) * 2017-02-02 2019-09-03 麦克赛尔株式会社 Display device and remote operation control device
CN106981101A (en) * 2017-03-09 2017-07-25 衢州学院 A kind of control system and its implementation for realizing three-dimensional panorama roaming
CN110582741A (en) * 2017-03-21 2019-12-17 Pcms控股公司 Method and system for haptic interaction detection and augmentation in augmented reality
CN110582741B (en) * 2017-03-21 2024-04-02 交互数字Vc控股公司 Method and system for haptic interaction detection and augmentation in augmented reality
CN108521604B (en) * 2018-03-30 2020-12-08 新华三云计算技术有限公司 Multi-screen display method and device for redirecting video
CN108521604A (en) * 2018-03-30 2018-09-11 新华三云计算技术有限公司 Redirect the multi-display method and device of video
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
CN111746401A (en) * 2020-06-29 2020-10-09 广州小鹏车联网科技有限公司 Interaction method based on three-dimensional parking and vehicle
CN113223344A (en) * 2021-05-25 2021-08-06 湖南汽车工程职业学院 Big data-based professional teaching display system for art design
CN113791685A (en) * 2021-08-16 2021-12-14 青岛海尔科技有限公司 Method and device for moving component, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2014113507A1 (en) 2014-07-24
CN113568506A (en) 2021-10-29
DE112014000441T5 (en) 2015-10-15

Similar Documents

Publication Publication Date Title
CN105308536A (en) Dynamic user interactions for display control and customized gesture interpretation
US11269481B2 (en) Dynamic user interactions for display control and measuring degree of completeness of user gestures
US11740705B2 (en) Method and system for controlling a machine according to a characteristic of a control object
US11567578B2 (en) Systems and methods of free-space gestural interaction
US11181985B2 (en) Dynamic user interactions for display control
CN104246682B (en) Enhanced virtual touchpad and touch-screen
CN105683882B (en) Waiting time measurement and test macro and method
WO2014113454A1 (en) Dynamic, free-space user interactions for machine control
CN101810003B (en) Enhanced camera-based input
US20200004403A1 (en) Interaction strength using virtual objects for machine control
CN104541232B (en) Multi-modal touch-screen emulator
Alavi A Framework for Optimal In-Air Gesture Recognition in Collaborative Environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200709

Address after: Bristol, United Kingdom

Applicant after: ULTRAHAPTICS IP LTD.

Address before: California, USA

Applicant before: LMI Clearing Co.,Ltd.

Effective date of registration: 20200709

Address after: California, USA

Applicant after: LMI Clearing Co.,Ltd.

Address before: California, USA

Applicant before: LEAP MOTION, Inc.

TA01 Transfer of patent application right
RJ01 Rejection of invention patent application after publication

Application publication date: 20160203

RJ01 Rejection of invention patent application after publication