CN104067201A - Gesture input with multiple views, displays and physics - Google Patents

Gesture input with multiple views, displays and physics

Info

Publication number
CN104067201A
CN104067201A · CN201180076283.2A · CN201180076283A
Authority
CN
China
Prior art keywords
display
user
gesture
display area
virtual objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201180076283.2A
Other languages
Chinese (zh)
Other versions
CN104067201B (en)
Inventor
G. Anderson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to CN201511009413.7A priority Critical patent/CN105653031B/en
Publication of CN104067201A publication Critical patent/CN104067201A/en
Application granted granted Critical
Publication of CN104067201B publication Critical patent/CN104067201B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Gesture input with multiple displays, views, and physics is described. In one example, a method includes generating a three dimensional space having a plurality of objects in different positions relative to a user and a virtual object to be manipulated by the user, presenting, on a display, a displayed area having at least a portion of the plurality of different objects, detecting an air gesture of the user against the virtual object, the virtual object being outside the displayed area, generating a trajectory of the virtual object in the three-dimensional space based on the air gesture, the trajectory including interactions with objects of the plurality of objects in the three-dimensional space, and presenting a portion of the generated trajectory on the displayed area.

Description

Gesture Input with Multiple Views, Displays and Physics
Field
The present description relates to methods of user input and display in computer systems, and in particular to representing user gestures across multiple displays or in a three-dimensional display system.
Background
Computer system inputs have been developed to include air gestures and touch-screen gestures. Air gestures can involve a user moving his or her body so that a corresponding action occurs on a display or a corresponding command is executed by the computing system. One form of current air-gesture technology uses movable sensors as a controller or as part of a game console. The sensor is held in the hand, attached to the body, or manipulated by the hands, feet, or other parts of the body (as in the Nintendo Wii remote, the Sony PlayStation Move, various smartphones, and handheld gaming devices). Another form of air-gesture technology uses 3D cameras and microphone technology (such as the Microsoft Kinect and the Sony PlayStation Eye) to approximate human body motion as a modal input source.
Television, computer, and portable device displays are the typical feedback mechanisms for viewing the effect of air-gesture modal input on a graphical environment. Cameras collect video input for gesture detection, and the video input is interpreted with software running on a game console or personal computer. A camera array allows the cameras to sense depth, which provides the ability to recognize the position or distance of a person's body relative to the cameras. A camera array therefore allows additional air gestures that move toward and away from the cameras.
As another form of gesture input, the screens of game consoles and of computers such as desktops, notebooks, tablets, and smartphones incorporate touch-screen technology that responds to touch input. Touch and swipe gestures on a display screen are used as user input for executing commands, for example to move an object from one screen (e.g., a hand-held console screen) to another screen (e.g., a television screen). Such a feature is implemented, for example, when a PlayStation Portable gaming console is used together with a PlayStation 3 console, both sold by Sony. Touch and swipe gestures are also provided on the trackpads of notebook computers and on the surfaces of peripheral mice and external trackpads.
Brief Description of the Drawings
Embodiments of the invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.
Figure 1A is a diagram of a display showing a first view of an application according to an embodiment of the invention and a user gesture applied to that view.
Figure 1B is a diagram of a display showing a second view of the application according to an embodiment of the invention and a user gesture applied to that view.
Figure 1C is a diagram of two displays simultaneously showing the first and second views of the application according to an embodiment of the invention and a user gesture applied to one of the views.
Figure 2A is a diagram of a display showing a third view of an application according to an embodiment of the invention and a user gesture applied to that view.
Figure 2B is a diagram of two displays simultaneously showing different portions of the third view of the application according to an embodiment of the invention and a user gesture applied to that view.
Figure 3 is a process flow diagram of interpreting a gesture based on a selected view of an application according to an embodiment of the invention.
Figure 4 is a process flow diagram of interpreting a gesture based on a selected view of an application and a selected display according to an embodiment of the invention.
Figure 5 is a process flow diagram of interpreting a gesture based on a selected view of an application and a selected display according to another embodiment of the invention.
Figure 6 is a process flow diagram of interpreting gestures across multiple displays using interactive physics of an application according to an embodiment of the invention.
Figure 7 is a process flow diagram of interpreting gestures across multiple displays using interactive physics of an application according to another embodiment of the invention.
Figure 8 is a block diagram of a computer system suitable for implementing processes of the present disclosure according to an embodiment of the invention.
Figure 9 is a block diagram of an alternative view of the computer system of Figure 8 suitable for implementing processes of the present disclosure according to an embodiment of the invention.
Detailed Description
Although gestures, whether air gestures or touch gestures, are increasingly used in computing environments, they still lack a common trait of pointing devices: they do not necessarily indicate where the gesture is directed. With multiple windows, screens, or displays, it is not always clear where a gesture points. In the examples described below, the computing system interprets a user gesture in different ways depending on the current view being presented by an application or by the computing system. The computing system determines the device, window, or screen that the user is facing or looking at in order to determine the object at which the gesture is directed. Different views of the same application or game can be shown simultaneously on multiple displays, allowing the user to direct gesture input from different perspectives. A similar approach can be applied to voice commands.
Although eye tracking can be used to pair a voice command with an object on a computer display, multiple devices may have displays that simultaneously present different objects, and the displays may also present the same object in different ways. Depending on the current view of the application presented on a screen and on which screen the user is looking at, the system can react differently to air, touch, or voice gestures. The air or voice gesture can then be directed to the appropriate view.
In addition, air, touch, and voice gestures can be used to cause interactions between virtual objects that are not displayed and elements on the screen, creating physical effects on displayed virtual objects. In such cases, virtual objects can interact in three dimensions in front of and behind the plane of the displayed screen, and the displayed objects may be shown on any one of several different screens.
The three-dimensional space can be characterized by targets, obstacles, and terrain, for example in a computer gaming environment, which interact with a user gesture applied to a virtual object by virtue of the physical attributes of those objects. Three-dimensional physical effects can be expressed in this space. Within this space, games and other applications can combine forces from targets, obstacles, and terrain with forces from air gestures to provide more complex or more lifelike interaction with the user.
Figure 1A is a diagram of an air-gesture system with a display 101 coupled to an array of cameras 103 and an array of microphones 105. In the illustrated example there are two cameras and two microphones; however, a greater or smaller number of cameras or microphones may be used for more or less accurate sensing of position and direction. The display may be a direct-view or projection display based on any type of display technology. As shown, the camera and microphone arrays are positioned above and attached to the display; however, any other position may be used. The cameras and microphones may be separated from each other and from the display, and the arrays can be calibrated or configured with knowledge of the display's position in order to compensate for offset positions. The display may be part of a portable computer, a game console, a handheld smartphone, a personal digital assistant, or a media player. Alternatively, the display may be a large flat-panel television or computer monitor.
In the illustrated example, the display shows three submarines 109 in an undersea environment in side view. A user, represented by a hand 107, performs an air gesture to direct a torpedo 111 at the displayed submarines. The air gesture is detected by the cameras in order to carry out the command of launching the torpedo. The system uses a gesture library for the undersea environment that contains the possible gestures. When the hand performs a gesture, the system compares the observed gesture against the gesture library, finds the closest gesture, and then looks up the associated command, such as launching a torpedo.
Figure 1B shows the same display 101 with the same camera and microphone arrays and the same submarines 109. In Figure 1B, however, the submarines are viewed from above, that is, looking down toward the submarines from the surface or from a shallow depth. The user 107 performs the same air gesture, but here it instead causes a depth charge 113 to be released downward toward the submarines. As can be seen, the same finger pinch-and-release gesture can cause different actions depending on whether the submarines are viewed from the side, as in Figure 1A, or from above, as in Figure 1B. In the example of Figure 1A, the user gesture for the side view can be made as a throwing motion with a pinch and release, causing a torpedo to travel toward the target. In Figure 1B, the same pinch and release can cause a depth charge to be dropped onto the on-screen target. Although the gesture is the same, the system can determine whether the current view is from the side or from above in order to decide whether the gesture is interpreted as the release of a torpedo or the release of a depth charge. As a result, the user can use a simple, intuitive gesture to cause different commands to be executed by the system.
Figure 1C shows two of the same displays side by side. In the illustrated example, both displays have camera and microphone arrays; however, a single camera and microphone array could also be used. The arrays may be attached to the displays or placed in different positions. In this example, each display 101a and 101b shows the same three submarines: one shows the submarines 109a viewed from the side and the other shows the submarines 109b viewed from above. The user can launch a torpedo at, or drop a depth charge 113 onto, the same submarines, depending on which screen is used or which screen is active at the time. As shown, the environment presents two displays that simultaneously show the same three submarines. A gesture such as the pinch-and-release gesture does not indicate which display the user intends, so the system does not know whether to generate a torpedo command or a depth-charge command. In this example, the camera arrays on one or both screens can determine which screen the user is addressing. For example, by tracking the user's face, eye focus, or voice direction, the system can determine which screen the user's attention is focused on and then activate the corresponding command for that screen.
The same approach can also be used with touch-screen or touch-surface gestures, and with voice commands, rather than free-hand air gestures. The user may have a touch screen or touch surface and perform gestures on that surface. Again, to determine which view a gesture should be applied to, the system can determine where the user's attention is focused. If the user is focused on the side view, then a gesture on the touch surface can cause a torpedo to be launched; if the user is focused on the top view, the same gesture can cause a depth charge to be launched. The two views of Figures 1A and 1B may represent two different views of a single application. In Figure 1C the application generates both views simultaneously, whereas in Figures 1A and 1B only one view can be seen at a time. In either example, the system can determine the current view being used by the user and the current display.
If there is only one display, a single view may be used; however, a single monitor can also present different windows on one display. For example, the view of Figure 1A may be presented in one window of the display and the view of Figure 1B in another window. In such an example, the camera array 103 can determine which of the two windows the user is focused on and then execute the appropriate command for the user's gesture.
Figure 2A shows a different screen display. In Figure 2A, the same display 101 with the same camera 103 and microphone 105 arrays presents a different view. In this example, the user 107 launches a virtual spaceship onto the screen with an air gesture. The spaceship appears on the screen after traveling some distance from the user's air gesture, and its behavior is controlled by the user gesture that threw it and by the objects on the screen. In the illustrated example, there is a large planet 121 surrounded by several moons 123. The user is presented with a target 125 on the planet, which the flying spaceship 127 approaches.
In the example of Figure 2A, the planet and each of the moons have sizes relative to one another, and those sizes determine their gravity when the spaceship is launched toward the target 125. The moons and the planet change the traveling speed and direction of the spaceship because of their gravitational fields. As a result, a user may intend to launch a spaceship directly at the target, but the spaceship may be deflected off course by a nearby moon, or it may enter an orbit around the planet or a moon and never actually reach the target directly.
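A minimal sketch of how such gravitational deflection could be simulated, assuming simple Newtonian point masses and Euler integration, is shown below; the masses, positions, and time step are invented for illustration and are not taken from the embodiments above.

```python
# Each body is (position, mass); gravity bends the spaceship's path each step.
G = 1.0  # gravitational constant in arbitrary game units (assumption)
bodies = [((0.0, 0.0, 5.0), 50.0),   # planet 121
          ((2.0, 1.0, 4.0), 5.0)]    # moon 123

def step(pos, vel, dt=0.02):
    ax = ay = az = 0.0
    for (bx, by, bz), mass in bodies:
        dx, dy, dz = bx - pos[0], by - pos[1], bz - pos[2]
        r = max((dx*dx + dy*dy + dz*dz) ** 0.5, 1e-6)
        a = G * mass / (r * r)              # acceleration toward the body
        ax, ay, az = ax + a*dx/r, ay + a*dy/r, az + a*dz/r
    vel = (vel[0] + ax*dt, vel[1] + ay*dt, vel[2] + az*dt)
    pos = (pos[0] + vel[0]*dt, pos[1] + vel[1]*dt, pos[2] + vel[2]*dt)
    return pos, vel

# The gesture supplies the initial position and velocity; integration yields the path.
pos, vel = (0.0, -1.0, 0.0), (0.0, 0.5, 2.0)
trajectory = []
for _ in range(200):
    pos, vel = step(pos, vel)
    trajectory.append(pos)
```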
In the example of Figure 2A, the screen can present a portion of a three-dimensional space generated by the system. In this space, objects appear in the foreground and background relative to the screen. They can be presented on a three-dimensional display using shutter glasses or lenticular pixels that create the perception that objects are nearer to or farther from the user, or they can be presented on a two-dimensional display using perspective. In both cases, the screen surface represents a particular plane on a z-axis that runs toward and away from the user. The screen is located at one point on this z-axis, and an object launched by the user starts at some distance from the plane of the screen.
When the user throws an object toward the screen, it is at first a virtual object that is invisible to the user. As it reaches the plane of the screen in the three-dimensional space, it appears as a displayed object on the screen. After it passes the plane of the screen, it continues into the background, which can be represented as a distant point on the screen.
The interaction with objects on the screen can be further enhanced by including additional objects in the three-dimensional space that are not shown on the screen. As a result, the user may launch the spaceship 127 toward the target 125 and find that its course and speed have changed before it reaches the plane of the screen. These objects and the changes in course they cause will not be shown on the screen; however, when the spaceship reaches the plane of the screen, the effect will be shown.
Figure 2B is a diagram of the same display and screen as Figure 2A; however, an additional screen 131 has been added. This screen is illustrated as a portable device such as a smartphone or portable gaming system, but it may be any other type of display, including a display of the same type as the main display 101. In this example, the small display 131 is placed in front of the main large display 101. The system can determine the position of the small screen 131 and present the portion of the three-dimensional space that lies in the plane of the small screen. Thus, in Figure 2B, the user 107 launches the spaceship 127 toward the planet 121, and in particular toward the target 125 on that planet. After the spaceship is thrown, it first appears on the small screen 131.
As shown, an object 129 that is not visible on the main screen 101 is visible on the small screen. This object 129 may take the form of another moon that can apply gravity or other forces to the spaceship 127. As the spaceship continues through the three-dimensional space, it will leave the small display 131 and shortly thereafter appear on the large display 101. The addition of the small screen adds a new dimension to this particular type of game. The camera array 103, or some other proximity-sensing system, can determine the position of the small screen in real time. The user can then move the small screen around to see objects that are not presented on the main screen 101. As a result, in the example of Figure 2A, if the course and speed of the thrown spaceship 127 change significantly, the user can use the small screen 131 to find out which objects are affecting its path and compensate accordingly. The small screen can be moved through different planes along the z-axis to see what lies in front of the large screen 101, and a similar approach can be used to see what lies to the side of or behind the large screen.
In the example of Figure 2B, the approaches discussed above with reference to Figure 1C can also be used. In the case of a smartphone, for example, the small screen 131 will also be equipped with one or more user-facing cameras and microphones. Although these are generally used for video conferencing and voice calls, the cameras and microphones can be used to determine the user's position and the positions of the other displays, and to see and interpret gestures. Similarly, the cameras on the small screen 131 and the large screen 101 can determine where the user's attention is focused and interpret air or other gestures according to the particular display being used. Thus, for example, instead of showing a different portion of the three-dimensional space, the small screen 131 can be used to show a different view, as in the example of Figure 1C.
Figure 3 is an example process flow for using the displays and user configurations illustrated in Figures 1 and 2. In Figure 3, the process starts and the user launches an application, which may be a game or any other application to which gestures and multiple displays can be applied. At 303, the system presents the default view of the application. The default can be determined in a variety of ways. At 305, the system activates the gesture library for the default view. When the current view is the default view, this gesture library can be loaded by default. The gesture library can be formed in a variety of ways. In one example, the gesture library is in the form of a lookup table in which particular camera sensor points are connected to different commands that the program can execute. In the examples discussed above, similar gestures can be executed as a command to launch a torpedo, release a depth charge, or throw a spaceship, depending on the particular view presented to the user. A wide range of different gestures in the library can be used to execute different commands.
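To make the lookup-table idea concrete, a view-keyed gesture library might be organized roughly as below; the view names, gesture identifiers, and commands are hypothetical and only illustrate how the same gesture can resolve to different commands in different views.

```python
# Hypothetical view-keyed gesture libraries: the same recognized gesture name
# maps to a different command depending on which view is currently active.
GESTURE_LIBRARIES = {
    "submarine_side_view": {"pinch_release": "launch_torpedo"},
    "submarine_top_view":  {"pinch_release": "drop_depth_charge"},
    "space_view":          {"pinch_release": "throw_spaceship"},
}

def load_library(current_view):
    return GESTURE_LIBRARIES[current_view]

def command_for(gesture_name, current_view):
    return load_library(current_view).get(gesture_name)

print(command_for("pinch_release", "submarine_top_view"))  # -> drop_depth_charge
```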
At 307, the system waits to determine whether a gesture has been received. The gesture may be received by the cameras, by a touch surface, or by a touch screen, or a voice gesture may be received at the microphones. If a gesture is received, the process proceeds to block 311, where the gesture is matched against the currently loaded library. The system matches the gesture to a gesture in the library and then looks up the corresponding command.
At 313, the command is executed, and at 315 the display is modified to show the action of the executed command on the screen. After the command for the gesture is executed, the system determines at 317 whether there has been a change in view. A change in view corresponds to a different window on the display or to a different display. If a change in view is detected, the process presents the change in view and then returns to block 305 to change the gesture library to correspond to the change in view. If no change in view is detected, then at 307 the system continues to wait for a new user gesture. If a user gesture is received, then, as before, the gesture is matched against the currently loaded library at 311. If no gesture is received, the system moves to 317 to determine whether a change in view has been detected. This cycle can repeat, receiving additional user gestures and checking for changes, to provide user interaction during use of the system.
Figure 4 shows an alternative process flow for using multiple views and multiple displays with an application. At 401, the system is started and the application is launched. At 403, the default view of the application is presented. At 405, the active display is determined. This can be determined by determining the user's focus or attention. In one example, a camera array determines which direction the user is looking. For example, the cameras can detect the face and determine the angle of the face to decide whether the user is looking directly at one display or at another. In the example of Figure 1C, this can be done using a separate camera array for each display; alternatively, a single camera array can determine whether the user is looking at one display or the other. In one example, the camera array determines the position of the user's pupils to determine which direction the user is looking; in another example, the camera array determines which direction the face is pointing. Other user actions can also be used to determine which display is the active display. For example, the user can point at a particular display, make a swiping motion in the air, or use various other gestures to indicate which display should be the active display; a simplified sketch of such an active-display decision follows.
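One way the active-display decision could be sketched, assuming the camera array reports a head-yaw angle and that each display registers the angular range it occupies from the user's point of view; the thresholds and data layout are assumptions, not taken from the embodiments above.

```python
# Hypothetical: each display registers the horizontal angle range (degrees) it
# spans as seen from the user's position; the face tracker reports the head yaw.
DISPLAYS = {
    "display_101a": (-40.0, -5.0),   # left screen
    "display_101b": (5.0, 40.0),     # right screen
}

def active_display(head_yaw_deg):
    """Return the display the user is facing, or None if looking away."""
    for name, (lo, hi) in DISPLAYS.items():
        if lo <= head_yaw_deg <= hi:
            return name
    return None  # user looking away from all screens: the gesture can be ignored

print(active_display(-20.0))  # -> "display_101a"
print(active_display(60.0))   # -> None
```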
At 407, the gesture library for the current display and the current view is activated; the system loads an input-recognition library appropriate to that display and that view. At 409, the system determines whether a user gesture has been received. If a user gesture has been received, then at 411 the gesture is matched against the current library, the corresponding command is executed at 413, and the resulting modification of the display is made at 415. If no user gesture has been received, the process skips ahead to determine at 417 whether a change in view has been detected. If no change in view is detected, the system returns to 405 to determine the active display. If a change in view is detected, the changed view is presented at 419 and the process returns to determining the active display.
The process flow of Figure 4 allows the system to match gesture libraries to a particular view and a particular display. As a result, the application can present multiple views and multiple displays and change the effect of a user gesture depending on the current view and the current display. In alternative embodiments, only different views or only different displays, but not both, may be presented to the user, depending on the implementation.
Figure 5 shows a simplified process flow for using air gestures with different displays. At 501, the process starts and a display selection is received. The display selection can be made by using face detection or eye tracking to determine where the user is looking, by using a microphone array to determine which direction the user is speaking toward, or by the user indicating a particular display with a voice or air command. At 503, an air gesture is received. At 505, the current view of the selected display is determined. At 507, a command is selected based on the current view of the selected display, and at 509 the selected command is executed. The process repeats to provide repeated interaction between the user, the displays, and the views provided by the application.
Referring to Figure 6, an application can include various interactive physics from a gesture library to present the interaction of the user with different views. At 601, the application is started. At 603, the default view is presented. At 605, the gesture library for the current view is activated; the relevant gesture templates, together with the corresponding commands to be executed when a gesture is detected, are loaded into memory.
At 607, the system determines whether any additional displays are detected. If so, the position of the additional display is determined at 621. It can be determined using cameras, RF (radio frequency), or IR (infrared) sensors. At 623, a view is presented on the additional display based on its position. The process then returns to determine at 609 whether a user gesture has been received. If no user gesture has been received, the process waits in the background while other processes continue to detect additional displays and to detect the current view. Other processes can also run simultaneously to detect which display is active, as described in the examples above.
When a gesture is received at 609, it is matched against the current library at 611. The user may select a projectile with a gesture, launch a projectile, change configuration settings, and so on. When the gesture has been matched against the currently loaded gesture library, a command is selected and, at 613, the command is modified according to the parameters of the gesture. Thus, for example, using air gestures or other types of touch-surface gestures, the system can measure the speed of the hand, the angle of the hand's movement, the release point of the hand, or similar parameters. These parameters are then added to the command from the gesture library, and at 615 the resulting action is determined using interactive physics (see the sketch below).
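As a rough illustration of modifying a command with gesture parameters, the measured hand speed, release point, and motion angles could be folded into the launch command roughly as follows; the scaling factor and parameter names are assumptions made only for the sketch.

```python
import math

def build_launch(command, hand_speed, release_point, elevation_deg, azimuth_deg,
                 speed_scale=2.5):
    """Turn a library command plus measured gesture parameters into a launch action."""
    speed = hand_speed * speed_scale            # assumed mapping of hand speed to object speed
    el, az = math.radians(elevation_deg), math.radians(azimuth_deg)
    velocity = (speed * math.cos(el) * math.sin(az),
                speed * math.sin(el),
                speed * math.cos(el) * math.cos(az))
    return {"command": command, "origin": release_point, "velocity": velocity}

launch = build_launch("throw_spaceship", hand_speed=0.8,
                      release_point=(0.0, -1.0, 0.0),
                      elevation_deg=15.0, azimuth_deg=5.0)
print(launch)
```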
The user's gesture gives the virtual object launched by the air gesture a speed and a direction. The object may also have a virtual mass, air resistance, acceleration, and other possible physics parameters. The system then computes the interactive physics between the virtual object generated from the gesture parameters and the displayed objects in the three-dimensional space. Additional interactions can be computed with objects that are not displayed but are still present in the three-dimensional space. As an example, the moon 129 of Figure 2B is an object in the three-dimensional space that is not presented on the main screen 101; without the additional screen 131, the user cannot see this object as a displayed object. Nevertheless, this object can act on the virtual object generated by the air gesture. At 617, the commands are executed, and at 619 the display is modified so that the virtual object is shown when it reaches a display. The display is also modified to show the results of its interactions with the other objects in the three-dimensional space, including the displayed objects and perhaps also additional objects in the space that are not displayed. After the command has been executed, the system returns to 609 to receive additional user gestures.
Figure 7 shows a simplified process flow for using a three-dimensional space of objects and forces together with user gestures. At 701, the process starts and the application is launched. At 703, a three-dimensional space containing one or more objects and one or more forces is generated. In the example of Figure 2B, these objects are planets and moons with gravity; however, a wide range of different types of objects and different types of forces can be used. At 705, the system determines the displays that are available to the system. At 707, the relative positions and orientations of the available displays are determined, and at 709 a portion of the three-dimensional space is presented on the available displays. The amount of the three-dimensional space presented on a display can be determined based on the size and position of the display and on whether the display allows a three-dimensional or only a two-dimensional view to be presented. At 711, the system determines whether a user gesture has been received; if not, it waits for one. If a user gesture is received, then at 713 a trajectory of the resulting virtual object is generated in the three-dimensional space. At 715, a portion of the generated trajectory is shown on the available displays. As mentioned above, a virtual object launched as the result of a gesture may travel through one portion of the three-dimensional space without being visible on any display, and may travel through another portion of the space that is visible on a display. The system can determine the position of the virtual object as it travels through the three-dimensional space and compare that position with the portions of the space presented on the available displays. Thus the object can enter and leave displays while still traveling a consistent trajectory through the three-dimensional space. After the generated trajectory has been presented, the process returns to 711 to receive additional user gestures.
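A rough sketch of showing only the visible part of a trajectory follows, assuming for simplicity that each available display reports an axis-aligned region of the three-dimensional space that it presents; the region shapes and sizes are invented for illustration.

```python
# Hypothetical: each display presents an axis-aligned box of the 3D space.
DISPLAY_REGIONS = {
    "small_screen_131": ((-1.0, -1.0, 0.5), (1.0, 1.0, 1.5)),
    "large_screen_101": ((-3.0, -2.0, 2.0), (3.0, 2.0, 2.5)),
}

def visible_on(point):
    """Return the displays whose presented region contains this trajectory point."""
    x, y, z = point
    hits = []
    for name, ((x0, y0, z0), (x1, y1, z1)) in DISPLAY_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            hits.append(name)
    return hits

# The object enters and leaves displays while following one consistent trajectory.
for point in [(0.0, 0.0, 0.2), (0.0, 0.1, 1.0), (0.1, 0.3, 2.2)]:
    print(point, visible_on(point))
```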
Depending on the particular implementation, there is a wide variety of different effects that can be provided with the interactions. Some of these are expressed in the table below; however, embodiments of the invention are not limited to these.
Table
Figure 8 is a block diagram of a computing environment capable of supporting the operations discussed above. The modules and systems can be implemented in a variety of different hardware architectures and form factors, including the one shown in Figure 9.
The Command Execution Module 801 includes a central processing unit to cache and execute commands and to distribute tasks among the other modules and systems shown. It may include an instruction stack, cache memory to store intermediate and final results, and mass memory to store applications and operating systems. The Command Execution Module may also serve as the central coordination and task allocation unit for the system.
The Screen Rendering Module 821 draws objects on one or more screens for the user to see. It can be adapted to receive data from the Virtual Object Behavior Module 804, described below, and to render the virtual object and any other objects and forces on the appropriate screen or screens. Thus, the data from the Virtual Object Behavior Module would determine, for example, the position and dynamics of the virtual object and the associated gestures, forces, and objects, and the Screen Rendering Module would accordingly depict the virtual object and the associated objects and environment on a screen. The Screen Rendering Module can further be adapted to receive data from the Adjacent Screen Perspective Module 807, described below, to depict a target landing area for the virtual object if the virtual object could be moved to the display of the device with which the Adjacent Screen Perspective Module is associated. Thus, for example, if the virtual object is being moved from a main screen to an auxiliary screen, the Adjacent Screen Perspective Module could send data to the Screen Rendering Module to suggest, for example in shadow form, one or more target landing areas for the virtual object along the track of the user's hand movement or eye movement.
The Object and Gesture Recognition System 822 can be adapted to recognize and track the hand and arm gestures of a user. Such a module may be used to recognize hands, fingers, finger gestures, hand movements, and the location of hands relative to displays. For example, the Object and Gesture Recognition Module could determine that a user made a body part gesture to throw a virtual object onto one or the other of the multiple screens, or that the user made a body part gesture to move the virtual object to a bezel of one or the other of the multiple screens. The Object and Gesture Recognition System may be coupled to a camera or camera array, a microphone or microphone array, a touch screen or touch surface, or a pointing device, or some combination of these items, to detect gestures and commands from the user.
The touch screen or touch surface of the Object and Gesture Recognition System may include a touch-screen sensor. Data from the sensor may be fed to hardware, software, firmware, or a combination of these to map the touch gesture of a user's hand on the screen or surface to a corresponding dynamic behavior of a virtual object. The sensor data may be used for momentum and inertia factors to allow a variety of momentum behaviors for a virtual object based on input from the user's hand, such as the swipe rate of a user's finger relative to the screen. A pinching gesture may be interpreted as a command to lift a virtual object from the display screen, to begin generating a virtual binding associated with the virtual object, or to zoom in or out on the display. Similar commands may be generated by the Object and Gesture Recognition System using one or more cameras without the benefit of a touch surface.
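For the momentum mapping described here, a simple sketch might convert the sampled finger positions from the touch sensor into a release velocity and momentum for the virtual object; the sampling format, mass, and friction constant are assumptions for illustration only.

```python
def swipe_to_momentum(samples, virtual_mass=0.5, friction=0.9):
    """samples: list of (t_seconds, x, y) touch points reported by the sensor."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dt = max(t1 - t0, 1e-6)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # average swipe velocity
    momentum = (virtual_mass * vx, virtual_mass * vy)
    return {"velocity": (vx, vy), "momentum": momentum, "decay": friction}

# A fast swipe gives the virtual object a proportionally larger momentum.
print(swipe_to_momentum([(0.00, 100, 400), (0.05, 260, 380), (0.10, 420, 360)]))
```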
The Attention Direction Module 823 may be equipped with cameras or other sensors to track the position or orientation of a user's face or hands. When a gesture or voice command is issued, the system can determine the appropriate screen for the gesture. In one example, a camera is mounted near each display to detect whether the user is facing that display. If so, the Attention Direction Module information is provided to the Object and Gesture Recognition Module 822 to ensure that the gestures or commands are associated with the appropriate library for the active display. Similarly, if the user is looking away from all of the screens, commands can be ignored.
The Device Proximity Detection Module 825 can use proximity sensors, compasses, GPS (Global Positioning System) receivers, personal area network radios, and other types of sensors, together with triangulation and other techniques, to determine the proximity of other devices. Once a nearby device is detected, it can be registered to the system and its type determined as an input device, a display device, or both. For an input device, received data may then be applied in the Object and Gesture Recognition System 822. For a display device, it may be considered by the Adjacent Screen Perspective Module 807.
The Virtual Object Behavior Module 804 is adapted to receive input from the Object and Velocity and Direction Module and to apply such input to a virtual object being shown in the display. Thus, for example, the Object and Gesture Recognition System would interpret a user gesture by mapping the captured movements of the user's hand to recognized movements, the Virtual Object Tracker Module would associate the virtual object's position and movements with the movements recognized by the Object and Gesture Recognition System, the Object and Velocity and Direction Module would capture the dynamics of the virtual object's movements, and the Virtual Object Behavior Module would receive the input from the Object and Velocity and Direction Module in order to generate data that directs the movements of the virtual object to correspond to that input.
The Virtual Object Tracker Module 806, on the other hand, may be adapted to track where a virtual object should be located in the three-dimensional space in the vicinity of a display, and which body part of the user is holding the virtual object, based on input from the Object and Gesture Recognition Module. The Virtual Object Tracker Module 806 may, for example, track a virtual object as it moves across and between screens and track which body part of the user is holding that virtual object. Tracking the body part that is holding the virtual object allows continuous awareness of that body part's air movements, and thus an eventual awareness of whether the virtual object has been released onto one or more screens.
The Gesture to View and Screen Synchronization Module 808 receives the selection of the view, the screen, or both from the Attention Direction Module 823 and, in some cases, voice commands, in order to determine which view is the active view and which screen is the active screen. It then causes the relevant gesture library to be loaded for the Object and Gesture Recognition System 822. Various views of an application on one or more screens can be associated with alternative gesture libraries or sets of gesture templates for a given view. As an example, in Figure 1A a pinch-and-release gesture launches a torpedo, whereas in Figure 1B the same gesture launches a depth charge.
The Adjacent Screen Perspective Module 807, which may include or be coupled to the Device Proximity Detection Module 825, may be adapted to determine the angle and position of one display relative to another display. A projected display includes, for example, an image projected onto a wall or screen. The ability to detect the proximity of a nearby screen and the corresponding angle or orientation of a display projected from it may, for example, be accomplished with an infrared emitter and receiver or with electromagnetic or photo-detection sensing capability. For technologies that allow projected displays with touch input, incoming video can be analyzed to determine the position of the projected display and to correct for the distortion caused by displaying at an angle. An accelerometer, magnetometer, compass, or camera can be used to determine the angle at which a device is being held, while infrared emitters and cameras can allow the orientation of the screen device to be determined in relation to the sensors on an adjacent device. In this way, the Adjacent Screen Perspective Module 807 can determine the coordinates of an adjacent screen relative to its own screen coordinates. Thus, the Adjacent Screen Perspective Module can determine which devices are in proximity to each other and identify further potential targets for moving one or more virtual objects across screens. The Adjacent Screen Perspective Module may further allow the positions of the screens to be correlated with a model of the three-dimensional space representing all of the existing objects and virtual objects; a simplified coordinate mapping is sketched below.
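A simplified sketch of relating an adjacent screen's coordinates to the module's own screen coordinates, assuming the proximity sensing yields the neighbor's offset and in-plane rotation; a full solution would use a complete 3D pose, which is omitted here, and all values are illustrative.

```python
import math

def to_own_coords(point_on_adjacent, offset, angle_deg):
    """Map a 2D point on the adjacent screen into this screen's coordinate frame.

    offset: position of the adjacent screen's origin in our coordinates (assumed known
    from proximity/IR sensing); angle_deg: its in-plane rotation relative to us.
    """
    a = math.radians(angle_deg)
    x, y = point_on_adjacent
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + offset[0], ry + offset[1])

# A landing target near the adjacent screen's centre, expressed in our coordinates.
print(to_own_coords((0.5, 0.3), offset=(2.0, 0.1), angle_deg=12.0))
```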
The Object and Velocity and Direction Module 803 may be adapted to estimate the dynamics of a virtual object being moved, such as its trajectory, velocity (whether linear or angular), and momentum (whether linear or angular), by receiving input from the Virtual Object Tracker Module. The Object and Velocity and Direction Module may further be adapted to estimate the dynamics of any physics forces, for example by estimating the acceleration, deflection, or degree of stretching of a virtual binding, and to estimate the dynamics of a virtual object once it is released by the user's body part. The Object and Velocity and Direction Module may also use image motion, size, and angle changes to estimate the velocity of objects, such as the velocity of hands and fingers.
The Momentum and Inertia Module 802 can use image motion, image size, and angle changes of objects in the image plane or in the three-dimensional space to estimate the velocity and direction of objects in the space or on a display. The Momentum and Inertia Module is coupled to the Object and Gesture Recognition System 822 to estimate the velocity of gestures performed by hands, fingers, and other body parts, and then to apply those estimates to determine the momentum and velocities of virtual objects affected by the gesture.
The 3D Image Interaction and Effects Module 805 tracks user interaction with 3D images that appear to extend out of one or more screens. The influence of objects along the z-axis (toward and away from the plane of the screen), together with the relative influence of these objects on each other, can be calculated. For example, an object thrown by a user gesture can be influenced by 3D objects in the foreground before the virtual object arrives at the plane of the screen. These objects may change the direction or velocity of the projectile or destroy it entirely. The object can be rendered by the 3D Image Interaction and Effects Module in the foreground on one or more of the displays.
Figure 9 is a block diagram of a computing system, such as a personal computer, game console, smartphone, or portable gaming device. The computer system 900 includes a bus or other communication means 901 for communicating information, and processing means such as a microprocessor 902 coupled with the bus 901 for processing information. The computer system may be augmented with a graphics processor 903 specifically for rendering graphics through parallel pipelines as described above, and with a physics processor 905 for calculating interactive physics. These processors may be incorporated into the central processor 902 or provided as one or more separate processors.
The computer system 900 further includes a main memory 904, such as a random access memory (RAM) or other dynamic data storage device, coupled to the bus 901 for storing information and instructions to be executed by the processor 902. The main memory may also be used for storing temporary variables or other intermediate information during execution of instructions by the processor. The computer system can also include a nonvolatile memory 906, such as a read-only memory (ROM) or other static data storage device, coupled to the bus for storing static information and instructions for the processor.
A mass memory 907, such as a magnetic disk, optical disc, or solid-state array, and its corresponding drive may also be coupled to the bus of the computer system for storing information and instructions. The computer system can also be coupled via the bus to a display device or monitor 921, such as a liquid crystal display (LCD) or organic light-emitting diode (OLED) array, for displaying information to a user. For example, graphical and textual indications of installation status, operating status, and other information may be presented to the user on the display device, in addition to the various views and user interactions discussed above.
Typically, user input devices, such as a keyboard with alphanumeric, function, and other keys, may be coupled to the bus for communicating information and command selections to the processor. Additional user input devices may include a cursor control input device such as a mouse, trackball, trackpad, or cursor direction keys, coupled to the bus for communicating direction information and command selections to the processor and for controlling cursor movement on the display 921.
Camera and microphone arrays 923 are coupled to the bus to observe gestures, record audio and video, and receive visual and audio commands, as mentioned above.
Communication interfaces 925 are also coupled to the bus 901. The communication interfaces may include a modem, a network interface card, or other well-known interface devices, such as those used for coupling to Ethernet, token ring, or other types of physical wired or wireless attachments, for purposes of providing a communication link to support a local or wide area network (LAN or WAN), for example. In this manner, the computer system may also be coupled to a number of peripheral devices, other clients, control surfaces or consoles, or servers via a conventional network infrastructure, including, for example, an intranet or the Internet.
It is to be appreciated that a system equipped with less or with more than the example described above may be preferred for certain implementations. Therefore, the configuration of the exemplary systems 800 and 900 will vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances.
Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application-specific integrated circuit (ASIC), and/or a field-programmable gate array (FPGA). The term "logic" may include, by way of example, software, hardware, and/or combinations of software and hardware.
Embodiments may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, a network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments of the present invention. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc Read-Only Memories), magneto-optical disks, ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), magnetic or optical cards, flash memory, or any other type of media/machine-readable medium suitable for storing machine-executable instructions.
Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection). Accordingly, as used herein, a machine-readable medium may, but is not required to, comprise such a carrier wave.
References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiments of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.
In the following description and claims, the term "coupled" along with its derivatives may be used. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.
As used in the claims, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements, and elements of one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described. Moreover, the actions of any flow diagram need not be implemented in the order shown, nor do all of the acts necessarily need to be performed; those acts that are not dependent on other acts may also be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.

Claims (39)

1. a method, comprising:
In the user interface system of computing system, receive suspension posture;
Determine the front view of working as on display;
Determined view is loaded to gesture library;
From loaded gesture library, select the order corresponding to the described described posture when front view; And
Carry out selected order.
2. the method for claim 1, is characterized in that, also comprises for described posture, determines display selection, and wherein select command comprises the order of selecting for selected display.
3. method as claimed in claim 2, is characterized in that, determines that display selects to comprise by utilizing camera to observe described user's position, determine user side to.
4. method as claimed in claim 3, is characterized in that, observation place comprises the direction of face's sensing of determining user.
5. method as claimed in claim 2, is characterized in that, observation place comprises the direction of the eyes sensing of determining described user.
6. the method for claim 1, is characterized in that, determines that display selects to comprise by utilizing microphone to observe the direction of described user's described voice, determine user side to.
7. the method for claim 1, is characterized in that, also comprises for each display, load gesture library, and wherein select command comprises select command from the described gesture library of selected display.
8. the method for claim 1, is characterized in that, described suspension posture comprises that finger is mobile, hand moves, arm moves, health moves, and at least one in verbal order.
9. the method for claim 1, it is characterized in that, also comprise reception voice command, wherein determine the direction that front view comprises that the face of sensing user points to of working as on display, and wherein select command comprises order and the described voice command of selecting described posture.
10. the method for claim 1, is characterized in that, described display is rendered as 3-D view by image.
11. A machine-readable medium having instructions stored thereon that, when executed by a computer, cause the computer to perform operations comprising:
receiving an air gesture in a user interface system of a computing system;
determining a current view on a display;
loading a gesture library for the determined view;
selecting a command from the loaded gesture library corresponding to the gesture for the current view; and
executing the selected command.
12. The medium of claim 11, wherein the operations further comprise determining a display selection for the gesture, and wherein selecting a command comprises selecting a command for the selected display.
13. The medium of claim 11, wherein the operations further comprise loading a gesture library for each display, and wherein selecting a command comprises selecting a command from the gesture library for the selected display.
14. An apparatus comprising:
an object and gesture recognition system to receive an air gesture;
a gesture, view, and screen synchronization module to determine a current view on a display and to load a gesture library for the determined view;
the object and gesture recognition module to select, from the loaded gesture library, a command corresponding to the gesture for the current view; and
a command execution module to execute the selected command.
15. The apparatus of claim 14, further comprising an attention direction module to determine a display selection for the gesture, view, and screen synchronization module by determining a direction in which a user's face is pointing.
16. The apparatus of claim 15, wherein the attention direction module determines a direction in which the user's eyes are pointing.
17. A method comprising:
receiving an air gesture in a user interface system of a computing system;
loading a gesture library for each of a plurality of displays;
determining, for the gesture, a selection of one of the plurality of displays;
selecting a command from the gesture library corresponding to the gesture for the selected display; and
executing the selected command.
18. The method of claim 17, wherein determining a display selection comprises determining a direction in which a user's face is pointing.
19. The method of claim 3, wherein observing a direction comprises determining a direction in which the user's eyes are pointing.
20. The method of claim 17, wherein determining a display selection comprises determining a direction of the user by observing a direction of the user's voice with a microphone.
21. A method comprising:
generating a three-dimensional space having a plurality of objects at different positions relative to a user and a virtual object to be manipulated by the user;
presenting, on a display, a display area having at least a portion of the plurality of different objects;
detecting an air gesture of the user with respect to the virtual object, the virtual object being outside the display area;
generating a trajectory of the virtual object in the three-dimensional space based on the air gesture, the trajectory including interactions with objects of the plurality of objects in the three-dimensional space; and
presenting a portion of the generated trajectory on the display area.
22. The method of claim 21, wherein the display area corresponds to a range of distances from the user in the three-dimensional space, and wherein presenting a portion of the generated trajectory comprises presenting the portion of the trajectory of the virtual object that lies within the range of distances.
23. The method of claim 21, wherein generating the three-dimensional space comprises generating a three-dimensional space having objects closer to the user than the objects presented on the display area, and wherein generating a trajectory comprises including interactions with objects not presented in the display area.
24. The method of claim 21, wherein the included interactions comprise models of acceleration forces.
25. The method of claim 24, wherein the acceleration forces comprise at least one of gravity, electromagnetic force, and elastic force.
26. The method of claim 24, wherein presenting a display area comprises presenting a representation of the relative amounts of the acceleration forces attributed to objects of the display area.
27. The method of claim 21, wherein the included interactions comprise models of surface tension.
28. The method of claim 21, wherein the included interactions comprise solid-body collisions.
29. The method of claim 21, wherein generating the three-dimensional space comprises determining interactions between objects of the plurality of objects over time, and presenting, on the display area, the changes in position in the three-dimensional space over time caused by the interactions.
30. The method of claim 21, further comprising:
determining a position of a second display;
relating the determined position to the three-dimensional space; and
presenting, on the second display, a second display area having a second portion of the plurality of different objects of the three-dimensional space.
31. The method of claim 30, wherein determining the position of the second display comprises determining the position of the second display using a camera and the position of the first display.
32. The method of claim 30, wherein determining the position of the second display comprises determining the position of the second display using a radio transceiver coupled to the second display.
33. The method of claim 21, wherein the display presents the display area as a three-dimensional image.
34. A machine-readable medium having instructions stored thereon that, when executed by a computer, cause the computer to perform operations comprising:
generating a three-dimensional space having a plurality of objects at different positions relative to a user and a virtual object to be manipulated by the user;
presenting, on a display, a display area having at least a portion of the plurality of different objects;
detecting an air gesture of the user with respect to the virtual object, the virtual object being outside the display area;
generating a trajectory of the virtual object in the three-dimensional space based on the air gesture, the trajectory including interactions with objects of the plurality of objects in the three-dimensional space; and
presenting a portion of the generated trajectory on the display area.
35. The medium of claim 34, wherein the operations further comprise:
determining a position of a second display;
relating the determined position to the three-dimensional space; and
presenting, on the second display, a second display area having a second portion of the plurality of different objects of the three-dimensional space.
36. The medium of claim 35, wherein the display presents the display area as a three-dimensional image.
37. An apparatus comprising:
an object velocity and direction module to generate a three-dimensional space having a plurality of objects at different positions relative to a user and a virtual object to be manipulated by the user;
a screen rendering module to present, on a display, a display area having at least a portion of the plurality of different objects;
an object and gesture recognition system to detect an air gesture of the user with respect to the virtual object, the virtual object being outside the display area;
a virtual object behavior module to generate a trajectory of the virtual object in the three-dimensional space based on the air gesture, the trajectory including interactions with objects of the plurality of objects in the three-dimensional space; and
the screen rendering module to present a portion of the generated trajectory on the display area.
38. The apparatus of claim 37, wherein the display area corresponds to a range of distances from the user in the three-dimensional space, the apparatus further comprising a 3-D image interaction and effects module to present the portion of the trajectory of the virtual object that lies within the range of distances.
39. The apparatus of claim 38, wherein the object velocity and direction module generates a three-dimensional space having objects closer to the user than the objects presented on the display area, and wherein the virtual object behavior module generates trajectories that include interactions with objects not presented in the display area.
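
Illustrative sketch (not part of the claims or the original disclosure): claims 1-20 describe receiving an air gesture, determining the current view or the attended display, loading a gesture library for that view, and then selecting and executing the matching command. The Python sketch below shows only that dispatch idea; the view names, gesture names, commands, and display indices are all hypothetical.

```python
# Hypothetical view-specific gesture libraries: the same gesture maps to a
# different command depending on the current view.
GESTURE_LIBRARIES = {
    "photo_gallery": {
        "swipe_left": lambda: print("next photo"),
        "pinch": lambda: print("zoom out"),
    },
    "media_player": {
        "swipe_left": lambda: print("seek forward"),
        "pinch": lambda: print("lower volume"),
    },
}

def handle_air_gesture(gesture: str, current_view: str, target_display: int) -> bool:
    """Dispatch one recognized air gesture to a command for the current view."""
    library = GESTURE_LIBRARIES.get(current_view, {})  # load the library for the determined view
    command = library.get(gesture)                     # same gesture, view-specific meaning
    if command is None:
        return False                                   # gesture has no binding in this view
    print(f"display {target_display}:", end=" ")
    command()                                          # execute the selected command
    return True

# The same "swipe_left" resolves to different commands depending on the view in focus.
handle_air_gesture("swipe_left", "photo_gallery", target_display=0)
handle_air_gesture("swipe_left", "media_player", target_display=1)
```

The per-view dictionaries stand in for the gesture libraries that are loaded for the determined view or the selected display; in a real system the view and display selection would come from the camera- or microphone-based attention detection described above.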
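Claims 21-39 describe generating a physics-based trajectory for a virtual object in a three-dimensional space and presenting only the part of that trajectory that falls within the distance range covered by the display area. The sketch below is an illustrative toy model of that idea only: it assumes a gravity-only acceleration model, a floor collision, and a distance measured along a single axis, and none of the names come from the patent.

```python
from dataclasses import dataclass

GRAVITY = -9.8    # m/s^2, applied along the z (height) axis
TIME_STEP = 0.02  # seconds per simulation step

@dataclass
class State:
    x: float
    y: float   # distance from the user along the viewing direction
    z: float   # height above the floor
    vx: float
    vy: float
    vz: float

def generate_trajectory(s: State, steps: int = 200):
    """Integrate a toy physics model (gravity plus a floor collision)."""
    points = []
    for _ in range(steps):
        s = State(
            s.x + s.vx * TIME_STEP,
            s.y + s.vy * TIME_STEP,
            s.z + s.vz * TIME_STEP,
            s.vx,
            s.vy,
            s.vz + GRAVITY * TIME_STEP,
        )
        if s.z <= 0.0:  # solid collision with the floor ends the flight
            break
        points.append((s.x, s.y, s.z))
    return points

def visible_portion(points, near: float, far: float):
    """Keep only the points whose distance from the user lies within the
    display area's distance range; the rest of the path stays off-screen."""
    return [p for p in points if near <= p[1] <= far]

# A throw whose launch velocity would be derived from the detected air gesture
# (the numbers here are made up for illustration).
trajectory = generate_trajectory(State(x=0.0, y=0.3, z=1.2, vx=0.5, vy=3.0, vz=2.0))
on_screen = visible_portion(trajectory, near=1.0, far=4.0)
print(f"{len(on_screen)} of {len(trajectory)} trajectory points fall inside the display area")
```

A second display, as in claims 30-32 and 35, would simply apply a second distance range (or region of the space) to the same trajectory so that each screen shows its own portion of the object's path.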
CN201180076283.2A 2011-11-23 2011-11-23 Gesture input with multiple views, displays and physics Active CN104067201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511009413.7A CN105653031B (en) 2011-11-23 2011-11-23 Gesture input with multiple views, displays and physics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2011/062140 WO2013077883A1 (en) 2011-11-23 2011-11-23 Gesture input with multiple views, displays and physics

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201511009413.7A Division CN105653031B (en) 2011-11-23 2011-11-23 Gesture input with multiple views, displays and physics

Publications (2)

Publication Number Publication Date
CN104067201A true CN104067201A (en) 2014-09-24
CN104067201B CN104067201B (en) 2018-02-16

Family

ID=48470179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201180076283.2A Active CN104067201B (en) 2011-11-23 2011-11-23 Gesture input with multiple views, displays and physics

Country Status (5)

Country Link
US (4) US9557819B2 (en)
EP (1) EP2783269B1 (en)
KR (2) KR101760804B1 (en)
CN (1) CN104067201B (en)
WO (1) WO2013077883A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844705A (en) * 2016-03-29 2016-08-10 联想(北京)有限公司 Three-dimensional virtual object model generation method and electronic device
CN106648038A (en) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 Method and apparatus for displaying interactive object in virtual reality
CN107734385A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Video broadcasting method, device and electronic installation
CN110908568A (en) * 2018-09-18 2020-03-24 网易(杭州)网络有限公司 Control method and device for virtual object
CN110969658A (en) * 2018-09-28 2020-04-07 苹果公司 Locating and mapping using images from multiple devices
CN112437910A (en) * 2018-06-20 2021-03-02 威尔乌集团 Holding and releasing virtual objects

Families Citing this family (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5715007B2 (en) * 2011-08-29 2015-05-07 京セラ株式会社 Display device
KR101760804B1 (en) 2011-11-23 2017-07-24 인텔 코포레이션 Gesture input with multiple views, displays and physics
WO2013095678A1 (en) 2011-12-23 2013-06-27 Intel Corporation Mechanism to provide feedback regarding computing system command gestures
US9389682B2 (en) 2012-07-02 2016-07-12 Sony Interactive Entertainment Inc. Methods and systems for interaction with an expanded information space
FR2995704B1 (en) * 2012-09-19 2015-12-25 Inst Nat De Sciences Appliquees INTERACTIVITY MODE SELECTION METHOD
US9412375B2 (en) 2012-11-14 2016-08-09 Qualcomm Incorporated Methods and apparatuses for representing a sound field in a physical space
US9910499B2 (en) * 2013-01-11 2018-03-06 Samsung Electronics Co., Ltd. System and method for detecting three dimensional gestures to initiate and complete the transfer of application data between networked devices
US10042510B2 (en) * 2013-01-15 2018-08-07 Leap Motion, Inc. Dynamic user interactions for display control and measuring degree of completeness of user gestures
US8738101B1 (en) * 2013-02-06 2014-05-27 Makor Issues And Rights Ltd. Smartphone-tablet hybrid device
US9785228B2 (en) * 2013-02-11 2017-10-10 Microsoft Technology Licensing, Llc Detecting natural user-input engagement
CN104969148B (en) 2013-03-14 2018-05-29 英特尔公司 User interface gesture control based on depth
CN104143075A (en) * 2013-05-08 2014-11-12 光宝科技股份有限公司 Gesture judging method applied to electronic device
JP2016528579A (en) * 2013-05-24 2016-09-15 トムソン ライセンシングThomson Licensing Method and apparatus for rendering an object on multiple 3D displays
KR102102760B1 (en) * 2013-07-16 2020-05-29 엘지전자 주식회사 Display apparatus for rear projection-type capable of detecting touch input and gesture input
WO2015038128A1 (en) 2013-09-12 2015-03-19 Intel Corporation System to account for irregular display surface physics
US9507429B1 (en) * 2013-09-26 2016-11-29 Amazon Technologies, Inc. Obscure cameras as input
US10152136B2 (en) * 2013-10-16 2018-12-11 Leap Motion, Inc. Velocity field interaction for free space gesture interface and control
US9891712B2 (en) 2013-12-16 2018-02-13 Leap Motion, Inc. User-defined virtual interaction space and manipulation of virtual cameras with vectors
CN104951051B (en) * 2014-03-24 2018-07-06 联想(北京)有限公司 A kind of information processing method and electronic equipment
US10222866B2 (en) * 2014-03-24 2019-03-05 Beijing Lenovo Software Ltd. Information processing method and electronic device
CN104951211B (en) * 2014-03-24 2018-12-14 联想(北京)有限公司 A kind of information processing method and electronic equipment
WO2015152749A1 (en) * 2014-04-04 2015-10-08 Empire Technology Development Llc Relative positioning of devices
US9958529B2 (en) 2014-04-10 2018-05-01 Massachusetts Institute Of Technology Radio frequency localization
US9740338B2 (en) * 2014-05-22 2017-08-22 Ubi interactive inc. System and methods for providing a three-dimensional touch screen
US9575560B2 (en) 2014-06-03 2017-02-21 Google Inc. Radar-based gesture-recognition through a wearable device
US9921660B2 (en) * 2014-08-07 2018-03-20 Google Llc Radar-based gesture recognition
US9811164B2 (en) 2014-08-07 2017-11-07 Google Inc. Radar-based gesture sensing and data transmission
US9588625B2 (en) 2014-08-15 2017-03-07 Google Inc. Interactive textiles
US10268321B2 (en) 2014-08-15 2019-04-23 Google Llc Interactive textiles within hard objects
US11169988B2 (en) 2014-08-22 2021-11-09 Google Llc Radar recognition-aided search
US9778749B2 (en) 2014-08-22 2017-10-03 Google Inc. Occluded gesture recognition
US10747426B2 (en) * 2014-09-01 2020-08-18 Typyn, Inc. Software for keyboard-less typing based upon gestures
US9600080B2 (en) 2014-10-02 2017-03-21 Google Inc. Non-line-of-sight radar-based gesture recognition
US9898078B2 (en) * 2015-01-12 2018-02-20 Dell Products, L.P. Immersive environment correction display and method
US10101817B2 (en) * 2015-03-03 2018-10-16 Intel Corporation Display interaction detection
US10016162B1 (en) 2015-03-23 2018-07-10 Google Llc In-ear health monitoring
US9983747B2 (en) 2015-03-26 2018-05-29 Google Llc Two-layer interactive textiles
US9848780B1 (en) 2015-04-08 2017-12-26 Google Inc. Assessing cardiovascular function using an optical sensor
US10139916B2 (en) 2015-04-30 2018-11-27 Google Llc Wide-field radar-based gesture recognition
KR102229658B1 (en) 2015-04-30 2021-03-17 구글 엘엘씨 Type-agnostic rf signal representations
JP6427279B2 (en) 2015-04-30 2018-11-21 グーグル エルエルシー RF based fine motion tracking for gesture tracking and recognition
US10080528B2 (en) 2015-05-19 2018-09-25 Google Llc Optical central venous pressure measurement
US9693592B2 (en) 2015-05-27 2017-07-04 Google Inc. Attaching electronic components to interactive textiles
US10088908B1 (en) 2015-05-27 2018-10-02 Google Llc Gesture detection and interactions
US10376195B1 (en) 2015-06-04 2019-08-13 Google Llc Automated nursing assessment
US10379639B2 (en) 2015-07-29 2019-08-13 International Business Machines Corporation Single-hand, full-screen interaction on a mobile device
KR102449838B1 (en) 2015-09-01 2022-09-30 삼성전자주식회사 Processing method and processing apparatus of 3d object based on user interaction
US10817065B1 (en) 2015-10-06 2020-10-27 Google Llc Gesture recognition using multiple antenna
CN107851932A (en) 2015-11-04 2018-03-27 谷歌有限责任公司 For will be embedded in the connector of the externally connected device of the electronic device in clothes
US10976819B2 (en) 2015-12-28 2021-04-13 Microsoft Technology Licensing, Llc Haptic feedback for non-touch surface interaction
US11188143B2 (en) * 2016-01-04 2021-11-30 Microsoft Technology Licensing, Llc Three-dimensional object tracking to augment display area
US10503968B2 (en) 2016-03-22 2019-12-10 Intel Corporation Identifying a local coordinate system for gesture recognition
US10628505B2 (en) * 2016-03-30 2020-04-21 Microsoft Technology Licensing, Llc Using gesture selection to obtain contextually relevant information
WO2017192167A1 (en) 2016-05-03 2017-11-09 Google Llc Connecting an electronic component to an interactive textile
US10175781B2 (en) 2016-05-16 2019-01-08 Google Llc Interactive object with multiple electronics modules
US10133474B2 (en) 2016-06-16 2018-11-20 International Business Machines Corporation Display interaction based upon a distance of input
CN107728482A (en) * 2016-08-11 2018-02-23 阿里巴巴集团控股有限公司 Control system, control process method and device
US10297085B2 (en) 2016-09-28 2019-05-21 Intel Corporation Augmented reality creations with interactive behavior and modality assignments
US10331190B2 (en) 2016-11-09 2019-06-25 Microsoft Technology Licensing, Llc Detecting user focus on hinged multi-screen device
US10303417B2 (en) * 2017-04-03 2019-05-28 Youspace, Inc. Interactive systems for depth-based input
US10303259B2 (en) 2017-04-03 2019-05-28 Youspace, Inc. Systems and methods for gesture-based interaction
US10437342B2 (en) 2016-12-05 2019-10-08 Youspace, Inc. Calibration systems and methods for depth-based interfaces with disparate fields of view
WO2018106276A1 (en) * 2016-12-05 2018-06-14 Youspace, Inc. Systems and methods for gesture-based interaction
US10579150B2 (en) 2016-12-05 2020-03-03 Google Llc Concurrent detection of absolute distance and relative movement for sensing action gestures
US10475454B2 (en) * 2017-09-18 2019-11-12 Motorola Mobility Llc Directional display and audio broadcast
EP3694613A4 (en) * 2017-11-09 2021-08-04 Bo & Bo Ltd. System, device and method for external movement sensor communication
US10937240B2 (en) 2018-01-04 2021-03-02 Intel Corporation Augmented reality bindings of physical objects and virtual objects
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
US11500103B2 (en) 2018-10-23 2022-11-15 Lg Electronics Inc. Mobile terminal
WO2020166740A1 (en) * 2019-02-13 2020-08-20 엘지전자 주식회사 Mobile terminal
US11620044B2 (en) 2018-10-23 2023-04-04 Lg Electronics Inc. Mobile terminal
WO2020166739A1 (en) * 2019-02-13 2020-08-20 엘지전자 주식회사 Mobile terminal
US11899448B2 (en) * 2019-02-21 2024-02-13 GM Global Technology Operations LLC Autonomous vehicle that is configured to identify a travel characteristic based upon a gesture
US10884487B2 (en) * 2019-03-21 2021-01-05 Microsoft Technology Licensing, Llc Position based energy minimizing function
WO2020195292A1 (en) * 2019-03-26 2020-10-01 ソニー株式会社 Information processing device that displays sensory organ object
US11714544B2 (en) 2020-06-25 2023-08-01 Microsoft Technology Licensing, Llc Gesture definition for multi-screen devices
WO2024054915A1 (en) * 2022-09-09 2024-03-14 Snap Inc. Shooting interaction using augmented reality content in a messaging system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040193413A1 (en) * 2003-03-25 2004-09-30 Wilson Andrew D. Architecture for controlling a computer using hand gestures
US20100195869A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Visual target tracking
CN102184020A (en) * 2010-05-18 2011-09-14 微软公司 Method for manipulating posture of user interface and posture correction

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US9229474B2 (en) * 2010-10-01 2016-01-05 Z124 Window stack modification in response to orientation change
US8072470B2 (en) * 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US7532196B2 (en) 2003-10-30 2009-05-12 Microsoft Corporation Distributed sensing techniques for mobile devices
US7365737B2 (en) * 2004-03-23 2008-04-29 Fujitsu Limited Non-uniform gesture precision
US7394459B2 (en) * 2004-04-29 2008-07-01 Microsoft Corporation Interaction between objects and a virtual environment display
US9250703B2 (en) * 2006-03-06 2016-02-02 Sony Computer Entertainment Inc. Interface with gaze detection and voice input
US8577085B2 (en) 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US8565476B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8565477B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8267781B2 (en) 2009-01-30 2012-09-18 Microsoft Corporation Visual target tracking
US8682028B2 (en) 2009-01-30 2014-03-25 Microsoft Corporation Visual target tracking
US8588465B2 (en) 2009-01-30 2013-11-19 Microsoft Corporation Visual target tracking
US9256282B2 (en) * 2009-03-20 2016-02-09 Microsoft Technology Licensing, Llc Virtual object manipulation
US8941625B2 (en) * 2009-07-07 2015-01-27 Elliptic Laboratories As Control using movements
US9268404B2 (en) * 2010-01-08 2016-02-23 Microsoft Technology Licensing, Llc Application gesture interpretation
US9507418B2 (en) 2010-01-21 2016-11-29 Tobii Ab Eye tracker based contextual action
US8633890B2 (en) * 2010-02-16 2014-01-21 Microsoft Corporation Gesture detection based on joint skipping
WO2012021902A2 (en) * 2010-08-13 2012-02-16 Net Power And Light Inc. Methods and systems for interaction through gestures
US9122307B2 (en) * 2010-09-20 2015-09-01 Kopin Corporation Advanced remote control of host application using motion and voice commands
US8994718B2 (en) * 2010-12-21 2015-03-31 Microsoft Technology Licensing, Llc Skeletal control of three-dimensional virtual world
US8717318B2 (en) * 2011-03-29 2014-05-06 Intel Corporation Continued virtual links between gestures and user interface elements
US20120257035A1 (en) * 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Systems and methods for providing feedback by tracking user gaze and gestures
US9733791B2 (en) * 2011-09-12 2017-08-15 Microsoft Technology Licensing, Llc Access to contextually relevant system and application settings
KR101760804B1 (en) 2011-11-23 2017-07-24 인텔 코포레이션 Gesture input with multiple views, displays and physics

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040193413A1 (en) * 2003-03-25 2004-09-30 Wilson Andrew D. Architecture for controlling a computer using hand gestures
US20100195869A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Visual target tracking
CN102184020A (en) * 2010-05-18 2011-09-14 微软公司 Method for manipulating posture of user interface and posture correction

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106648038A (en) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 Method and apparatus for displaying interactive object in virtual reality
CN105844705A (en) * 2016-03-29 2016-08-10 联想(北京)有限公司 Three-dimensional virtual object model generation method and electronic device
CN105844705B (en) * 2016-03-29 2018-11-09 联想(北京)有限公司 A kind of three-dimensional object model generation method and electronic equipment
CN107734385A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Video broadcasting method, device and electronic installation
CN107734385B (en) * 2017-09-11 2021-01-12 Oppo广东移动通信有限公司 Video playing method and device and electronic device
CN112437910A (en) * 2018-06-20 2021-03-02 威尔乌集团 Holding and releasing virtual objects
CN110908568A (en) * 2018-09-18 2020-03-24 网易(杭州)网络有限公司 Control method and device for virtual object
CN110969658A (en) * 2018-09-28 2020-04-07 苹果公司 Locating and mapping using images from multiple devices
CN110969658B (en) * 2018-09-28 2024-03-29 苹果公司 Localization and mapping using images from multiple devices

Also Published As

Publication number Publication date
EP2783269A4 (en) 2016-04-13
US10963062B2 (en) 2021-03-30
WO2013077883A1 (en) 2013-05-30
EP2783269B1 (en) 2018-10-31
US20210286437A1 (en) 2021-09-16
US20230236670A1 (en) 2023-07-27
US20130278499A1 (en) 2013-10-24
EP2783269A1 (en) 2014-10-01
KR101760804B1 (en) 2017-07-24
KR101617980B1 (en) 2016-05-03
KR20160034430A (en) 2016-03-29
US9557819B2 (en) 2017-01-31
US20160179209A1 (en) 2016-06-23
US11543891B2 (en) 2023-01-03
CN104067201B (en) 2018-02-16
KR20140097433A (en) 2014-08-06

Similar Documents

Publication Publication Date Title
CN104067201A (en) Gesture input with multiple views, displays and physics
CN105653031B (en) Gesture input with multiple views, displays and physics
CN103119628B (en) Three-dimensional user interface effects on a display using properties of motion
CN108469899B (en) Method of identifying an aiming point or area in a viewing space of a wearable display device
US9952820B2 (en) Augmented reality representations across multiple devices
EP2725457A2 (en) Virtual reality display system
US20160375354A1 (en) Facilitating dynamic game surface adjustment
CN110262666A (en) Augmented reality user interface with touch feedback
KR20180094799A (en) Automatic localized haptics generation system
Rofouei et al. Your phone or mine? Fusing body, touch and device sensing for multi-user device-display interaction
EP2558924B1 (en) Apparatus, method and computer program for user input using a camera
CN110968194A (en) Interactive object driving method, device, equipment and storage medium
US10559131B2 (en) Mediated reality
Lee et al. Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality
CN104137026A (en) Interactive drawing recognition
Jeong et al. mGlove: Enhancing user experience through hand gesture recognition
Steed et al. Behaviour-aware sensor fusion: Continuously inferring the alignment of coordinate systems from user behaviour
RE Low cost augmented reality for industrial problems
CN117991967A (en) Virtual keyboard interaction method, device, equipment, storage medium and program product
Steed et al. Displays and Interaction for Virtual Travel
Laberge Visual tracking for human-computer interaction
KR20200061700A (en) System and method for providing virtual reality content capable of multi-contents
CN109316738A (en) A kind of human-computer interaction game system based on AR
Shibuya et al. Empirical Evaluation of Throwing Method to Move Object for Long Distance in 3D Information Space on Mobile Device
JP2012220986A (en) Display system and load distribution method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant