JP2007531113A - Identification of mobile device tilt and translational components - Google Patents


Info

Publication number
JP2007531113A
Authority
JP
Japan
Prior art keywords
device
axis
motion
gesture
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2007504983A
Other languages
Japanese (ja)
Inventor
B. Thomas Adler
Bruce A. Wilcox
David L. Marvit
Albert H. M. Reinhart
Hitoshi Matsumoto
Original Assignee
Fujitsu Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US10/807,589 priority Critical patent/US7301529B2/en
Priority to US10/807,568 priority patent/US7180501B2/en
Priority to US10/807,563 priority patent/US7301526B2/en
Priority to US10/807,571 priority patent/US7176887B2/en
Priority to US10/807,572 priority patent/US20050212760A1/en
Priority to US10/807,564 priority patent/US7180500B2/en
Priority to US10/807,569 priority patent/US7301528B2/en
Priority to US10/807,559 priority patent/US7176886B2/en
Priority to US10/807,558 priority patent/US7280096B2/en
Priority to US10/807,567 priority patent/US7365737B2/en
Priority to US10/807,588 priority patent/US7176888B2/en
Priority to US10/807,557 priority patent/US7365735B2/en
Priority to US10/807,561 priority patent/US7903084B2/en
Priority to US10/807,566 priority patent/US7173604B2/en
Priority to US10/807,570 priority patent/US7180502B2/en
Priority to US10/807,560 priority patent/US7365736B2/en
Priority to US10/807,565 priority patent/US7301527B2/en
Priority to US10/807,562 priority patent/US20050212753A1/en
Application filed by Fujitsu Limited
Priority to PCT/US2005/007409 priority patent/WO2005103863A2/en
Publication of JP2007531113A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 – G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 – G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 – G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163Indexing scheme relating to constructional details of the computer
    • G06F2200/1637Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer

Abstract

  A motion-controlled portable device has a first accelerometer that detects acceleration along a first axis and a second accelerometer that detects acceleration along a second axis, the second axis being perpendicular to the first axis. The device includes a tilt detection element that detects a rotational component around at least one of the first and second axes, and a display that presents a current image. The device also includes a motion tracking module that tracks the movement of the device in three dimensions using the first accelerometer, the second accelerometer, and the tilt detection element, and a controller that generates the current image and changes the current image in response to movement of the device.

Description

  The present invention relates generally to portable devices, and more particularly to portable devices with a motion interface.

  Computer devices such as cellular telephones and personal digital assistants (PDAs) are becoming available at a rapidly growing rate. Such devices provide many different functions to the user through different types of interfaces, such as keypads and displays. Some computer devices use motion as an interface by detecting tilting of the device by the user. Existing examples of motion interfaces include tethering the computer device with a fishing line or carrying a large magnetic tracking unit that requires a large amount of power.

  An object of the present invention is to provide a portable device having a motion interface.

  According to the present invention, a portable device having a motion interface is provided.

  In accordance with a particular embodiment, a motion-controlled portable device has a first accelerometer that detects acceleration along a first axis and a second accelerometer that detects acceleration along a second axis, the second axis being perpendicular to the first axis. The device includes a tilt detection element that detects a rotational component around at least one of the first and second axes, and a display that presents a current image. The device includes a motion tracking module that tracks the movement of the device in three dimensions using the first accelerometer, the second accelerometer, and the tilt detection element, and a controller that generates the current image and changes the current image in response to movement of the device.

  The display has a display surface, and the first and second axes are substantially parallel to the display surface. The tilt detection element may include a third accelerometer that detects acceleration along a third axis perpendicular to both the first axis and the second axis. The motion tracking module may be operable to distinguish, based on the acceleration measured by the third accelerometer, between translation in the plane formed by the first and second axes and rotation having a component around at least one of the first and second axes. Alternatively, the tilt detection element may include a camera that generates a video stream and a video analysis module that detects a direction of motion based on the video stream.

  According to another aspect, a method for controlling a portable device includes detecting acceleration along a first axis using a first accelerometer and detecting acceleration along a second axis using a second accelerometer, the second axis being perpendicular to the first axis. The method includes detecting a rotational component around at least one of the first and second axes using a tilt detection element, and tracking the movement of the device in three dimensions using the first accelerometer, the second accelerometer, and the tilt detection element. The method further includes generating a current image on a display of the device and changing the current image in response to the tracked movement of the device.

  Particular technical advantages of certain embodiments include a portable device with motion detection elements that can identify both tilt of the device and translation of the device within the plane of translation, and distinguish between them. The device can therefore recognize a larger number of movements usable as input, increasing the functionality of the device. In some embodiments, a number of different types of motion detection elements are combined, allowing the manufacturer to design the device with the elements most appropriate to the functionality desired for the device.

  Other technical advantages will become more apparent to those skilled in the art from the following drawings, detailed description, and claims. Although specific advantages are listed, various embodiments may include all, some, or none of the listed advantages.

  For a more complete understanding of certain embodiments of the present invention and the advantages thereof, the following description is made in conjunction with the accompanying drawings.

  FIG. 1 shows a portable device 10 with motion interface functionality according to a particular embodiment. The portable device 10 can recognize movement of the device and can execute various functions corresponding to such movement; the motion of the device thus serves as a form of input to the device. Such motion input may directly change what is displayed on the device's display or may perform other functions. The portable device 10 may be a mobile phone, personal digital assistant (PDA), still camera, video camera, portable computer, portable radio or other music or video player, digital thermometer, game device, portable electronic device, wristwatch, or any other device that the user can carry or wear. As listed above, the portable device 10 may also be a wearable device such as a wristwatch, which may include any computer device worn around the user's wrist.

  The portable device 10 includes a display 12, an input 14, a processor 16, a memory 18, a communication interface 20, and a motion detector 22. The display 12 may be a liquid crystal display (LCD), a light-emitting diode (LED) display, or any other type of device that presents the visual output of the device to the user. The input 14 provides a user interface for communicating input to the device and may consist of a keyboard, keypad, trackball, knob, touchpad, stylus, or any other element that allows the user to supply input to device 10. In particular embodiments, the display 12 and the input 14 may be combined into the same element, such as a touch screen.

  The processor 16 may be a microprocessor, controller, or any other suitable computing device or resource. The processor 16 is configured to execute various types of computer instructions in various computer languages to implement the functions available on the portable device 10, and may include any suitable controller that controls the management and operation of the portable device 10.

  The memory 18 may be any form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory element. The memory 18 includes software, logic modules, or elements executable by the processor 16, and may include various applications 19 with user interfaces that use motion input, such as the mapping, calendar, and file management applications described further below. As described later, the memory 18 also includes various databases, such as a gesture database and a function or gesture mapping database. The components of the memory 18 may be combined and/or partitioned for processing according to particular needs or desires within the scope of the present invention. The communication interface 20 supports wireless or wired communication of data and information with other devices, such as other portable devices, and with other components.

  The motion detector 22 tracks movement of the portable device 10, which is used as a form of input to perform certain functions. Such input movement results from the user moving the device in a desired manner to perform a desired task, as described further below.

  It should be understood that the portable device 10 according to particular embodiments may include any suitable processing and/or memory modules for performing the functions described herein, such as a control module, a motion tracking module, a video analysis module, a motion response module, a display control module, and a signature detection module.

  In particular embodiments, the input movement may be in the form of translation and/or gestures. Translation-based input focuses on the start and end points of a movement and the difference between those points. Gesture-based input focuses on the actual path the device travels and is a holistic view of the set of points traversed. As an example, when navigating a map using translation-based input, a movement in the form of an "O" changes the display during the movement, but there is ultimately no difference between the information displayed before and after the movement, because the device ends at what appears to be the same point at which it started. In gesture input mode, however, the device recognizes that it has traveled along an "O"-shaped path (even though the start and end points are the same), because gesture-based input focuses on the path taken between the start and end points of the movement. This "O" gesture movement may be mapped to a particular function, so that when the device recognizes that it has moved along a path comprising the "O" gesture it performs that function, as described in detail below. In particular embodiments, a movement of the device intended as a gesture may be recognized as such by matching a series or pattern of accelerations of the movement against the definitions stored in a gesture database.
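
  To make the distinction concrete, the following Python sketch contrasts the two views of the same movement. It is purely illustrative and not part of the disclosure: the function names (net_displacement, matches_gesture), the resampling step, and the error tolerance are assumptions standing in for whatever matching the gesture database actually performs.

```python
import math

# Illustrative sketch: translation-based input cares only about net displacement,
# while gesture-based input compares the whole traversed path against a template.

def net_displacement(path):
    """Translation-based view: only the start and end points matter."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    return (x1 - x0, y1 - y0)

def matches_gesture(path, template, tolerance=0.2):
    """Gesture-based view: compare the traversed path against a stored
    template, both resampled to the same number of points."""
    def resample(points, n=16):
        step = (len(points) - 1) / (n - 1)
        return [points[round(i * step)] for i in range(n)]
    a, b = resample(path), resample(template)
    error = sum(math.hypot(ax - bx, ay - by)
                for (ax, ay), (bx, by) in zip(a, b)) / len(a)
    return error < tolerance

# An "O"-shaped movement: the start and end points coincide, so the translation
# view sees no net change, but the gesture view still recognises the closed path.
o_path = [(math.cos(t), math.sin(t)) for t in (i * 2 * math.pi / 32 for i in range(33))]
print(net_displacement(o_path))         # approximately (0.0, 0.0)
print(matches_gesture(o_path, o_path))  # True
```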

  A portable device according to other embodiments may not include some of the elements of the device shown in FIG. 1. For example, some embodiments may provide a portable device 10 with a motion detector but without an input 14 separate from it, such that motion of the device provides the sole or primary input for the device. It should be understood that portable devices according to other embodiments may include additional elements not specifically shown with respect to device 10.

  FIG. 2 illustrates the motion detector 22 of FIG. 1 according to a particular embodiment of the present invention. In this example, the motion detector 22 includes accelerometers 24a, 24b, 24c, cameras 26a, 26b, 26c, gyros 28a, 28b, 28c, range finders 30a, 30b, 30c, and a processor 32.

  The accelerometers 24a, 24b, 24c detect movement of the device by detecting acceleration along their respective detection axes. A particular movement of the device may comprise a series or pattern of accelerations detected by the accelerometers. When the portable device is tilted about the detection axis of a particular accelerometer, the gravitational acceleration along that detection axis changes; this change is detected by the accelerometer and reflects the tilt of the device. Similarly, translation of the portable device, or movement of the device without rotation or tilt, also produces changes in acceleration along the detection axes that are detected by the accelerometers.

  In the illustrated example, accelerometer 24a is an x-axis accelerometer that detects movement of the device along the x-axis, accelerometer 24b is a y-axis accelerometer that detects movement along the y-axis, and accelerometer 24c is a z-axis accelerometer that detects movement along the z-axis. In combination, the accelerometers 24a, 24b, 24c can detect rotation and translation of the device 10. As described above, rotation and/or translation of the device 10 may function as input from the user operating the device.

  The use of three accelerometers for motion detection provides certain advantages. For example, if only two accelerometers were used, the motion detector could not unambiguously distinguish translation of the portable device from tilt in the plane of translation. Using a third, z-axis accelerometer (an accelerometer with a detection axis at least approximately perpendicular to the detection axes of the other two accelerometers) allows most cases of tilt to be disambiguated from most cases of translation.
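
  As a rough illustration of why the third axis helps, the sketch below treats a quasi-static three-axis reading as the gravity vector, estimates pitch and roll from it, and flags readings whose total magnitude departs from 1 g as containing a translational component. This is a minimal sketch under those assumptions, not the patented disambiguation method; the function name and tolerance are hypothetical.

```python
import math

G = 9.81  # nominal gravitational acceleration, m/s^2

def classify_sample(ax, ay, az, tolerance=0.5):
    """Rough three-axis disambiguation (a sketch, not the patented method):
    if the measured magnitude is close to 1 g, treat the reading as pure
    tilt and estimate pitch/roll from the gravity vector; otherwise assume
    a translational acceleration component is present."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if abs(magnitude - G) <= tolerance:
        pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
        return {"kind": "tilt", "pitch_deg": round(pitch, 1), "roll_deg": round(roll, 1)}
    return {"kind": "translation", "magnitude_ms2": round(magnitude, 2)}

# Device tilted 30 degrees about the y-axis while at rest:
print(classify_sample(G * math.sin(math.radians(30)), 0.0, G * math.cos(math.radians(30))))
# Device held level while being pushed along the x-axis:
print(classify_sample(5.0, 0.0, G))
```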

  It should be understood that some movements may remain inherently unrecognizable to the accelerometers 24a, 24b, 24c. For example, a movement combining a certain rotation and a certain translation may appear to the accelerometers 24a, 24b, 24c to be identical to another movement with a different rotation and a different translation. If the motion detector 22 contained only three accelerometers (without additional elements providing greater accuracy), such indistinguishable movements might be mapped to the same function, or might not be mapped to any function, in order to avoid confusion.

  As described above, the motion detector 22 includes cameras 26a, 26b, 26c, which may be charge-coupled device (CCD) cameras or other optical sensors. The cameras 26a, 26b, 26c provide another way to detect movement of the portable device (both tilt and translation). If only one camera were provided in the device for motion detection, tilt of the device might be indistinguishable from translation (unless other motion detection elements such as accelerometers are also used). By using at least two cameras, however, tilt and translation can be distinguished from each other. For example, if two cameras were provided on the portable device 10, one on the top of the device and one on the bottom, each camera would see the world move as the device is translated to the left. If the device is lying horizontally and is then rotated by lifting its left end and lowering its right end, the camera on the bottom perceives the world as moving to the right while the camera on the top perceives it as moving to the left. Thus, when the device translates, cameras facing in opposite directions see the world move in the same direction (in this example, to the left); when the device rotates, the opposing cameras see the world move in opposite directions. The deduction also works in reverse: if both cameras see the world move in the same direction, the motion detector knows the device is being translated; if they see the world move in opposite directions, the motion detector knows the device is being rotated.

  When the device is rotated, the amount of apparent world movement seen by a camera is directly related to the amount of rotation of the device, so the amount of rotation can be determined accurately from that apparent movement. When the device is translated, however, the magnitude of the translation depends both on the magnitude of the apparent world movement seen by the camera and on the distance to the objects in the camera's field of view. Therefore, to determine the amount of translation accurately using cameras alone, some form of information about the distance to the objects in the camera's field of view must be obtained. In some embodiments, a camera with a range-finding capability is used.
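
  The camera-based reasoning above can be condensed into a toy calculation. The sketch assumes two cameras on opposite faces reporting apparent world motion in a shared left/right convention, plus an optional range reading used to scale translation with a small-angle approximation; the names and units are assumptions, not the disclosed implementation.

```python
def classify_camera_motion(flow_top, flow_bottom, distance_m=None):
    """Two cameras on opposite faces report apparent world motion in radians
    using a shared sign convention. Same sign: the device is translating.
    Opposite signs: the device is rotating. A range reading lets the
    translation be scaled; rotation needs no distance information."""
    if flow_top == 0 and flow_bottom == 0:
        return {"kind": "stationary"}
    if flow_top * flow_bottom >= 0:  # both cameras see the world move the same way
        result = {"kind": "translation"}
        if distance_m is not None:
            # small-angle approximation: linear shift ~ angular flow * range
            result["translation_m"] = (flow_top + flow_bottom) / 2 * distance_m
        return result
    # opposite apparent directions: rotation, measurable without any distance
    return {"kind": "rotation", "rotation_rad": (abs(flow_top) + abs(flow_bottom)) / 2}

print(classify_camera_motion(0.02, 0.02, distance_m=1.5))  # translation, ~0.03 m
print(classify_camera_motion(0.02, -0.02))                 # rotation, 0.02 rad
```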

  It should be understood that, even in the absence of distance information, optical information can be of significant value when combined with information from accelerometers or other sensors. For example, optical camera input may be used to inform the device that no significant movement has occurred. This provides a solution to drift, an inherent problem when acceleration data is used to determine absolute position information for certain device functions.

  As mentioned above, distance information is useful for determining the amount of translation when cameras are used to detect motion. In the illustrated example, such distance information is provided by the rangefinders 30a, 30b, 30c, which may be any suitable distance-measuring elements such as ultrasonic or laser rangefinders. Other elements may also be used to obtain distance information; for example, a camera with a range-finding capability may be used, or multiple cameras may be placed on the same side of the device to act as a rangefinder using binocular stereoscopic vision (stereopsis). The determined distance information allows accurate and unambiguous computation of the motion components due to translation and those due to rotation.

  As described above, the motion detector 22 additionally includes gyros 28a, 28b, 28c. The gyros 28a, 28b, 28c are used in combination with other elements of the motion detector 22 to increase the accuracy of detecting the movement of the device.

  The processor 32 processes the data from the accelerometers 24, cameras 26, gyros 28, and rangefinders 30 and generates an output indicative of the movement of the device 10. The processor 32 may comprise a microprocessor, controller, or any other suitable computing device or resource, such as a video analysis module that receives a video stream from each camera. In some embodiments, the processing described herein in connection with processor 32 of the motion detector 22 may instead be performed by processor 16 of the portable device 10 or by any other suitable processor, including a processor located remote from the device.

  As described above, the motion detector 22 includes three accelerometers, three cameras, three gyros, and three rangefinders. Motion detectors according to other embodiments may include fewer or different elements. For example, some embodiments may include a motion detector with three accelerometers but no cameras, gyros, or rangefinders; a motion detector with two or three accelerometers and one or more gyros; a motion detector with two or three accelerometers and one or more cameras; or a motion detector with two or three accelerometers and one or more rangefinders. In addition, the placement of the motion detection elements on the device may differ in various embodiments. For example, one embodiment may place cameras on different surfaces of the device, while another may place two cameras on the same surface (eg, to add range-finding functionality).

  Changing the type, number, and position of the elements of the motion detector 22 may affect its ability to detect or accurately measure various types of movement. As mentioned above, the type and number of motion detector elements may vary in different embodiments to meet particular needs. In embodiments where accuracy is sacrificed to reduce the manufacturing cost of a portable device with motion detection capability, fewer or less accurate elements may be used. For example, a portable device may only need to detect that it has been translated, without detecting the exact amount of translation, in order to perform the desired functions. Such a portable device may include a motion detector with a camera but without a rangefinder or other element providing distance information. In particular embodiments, elements such as the cameras and rangefinders described above may also be used in the device for purposes other than the motion detection functions described.

  FIG. 3 shows an example of the use of the motion detection elements of the portable device 10 of FIG. 1. Raw data from the motion detection elements is processed by the processor 32. Such raw data includes x-axis accelerometer raw data 23a, y-axis accelerometer raw data 23b, and z-axis accelerometer raw data 23c from the accelerometers 24a, 24b, 24c, respectively; camera raw data 25a, 25b, 25c from the cameras 26a, 26b, 26c, respectively; gyro raw data 27a, 27b, 27c from the gyros 28a, 28b, 28c, respectively; and rangefinder raw data 29a, 29b, 29c from the rangefinders 30a, 30b, 30c, respectively. If, as in some embodiments, the portable device includes a greater number, a smaller number, or different motion detection elements, the raw data corresponds to the elements included.

  The raw data is processed by the processor 32 to produce a motion detector output 34 that identifies the movement of the device 10. In the illustrated example, the motion detector output 34 comprises translation along the x, y, and z axes and rotation about the x, y, and z axes. The motion detector output is communicated to the processor 16 of the portable device, which determines the operation, function, or task (i.e., device behavior 36) that the device should perform based on the motion of the device. The execution of particular operations, functions, or tasks based on particular movements is described further below.
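
  One way to picture the data flow of FIG. 3 in code is as a six-component output structure produced from raw sensor samples and then mapped to a device behavior. The sketch below is hypothetical: the field names, the trivial integration, and the threshold-based behavior selection are assumptions, not the actual processing performed by processors 32 and 16.

```python
from dataclasses import dataclass

@dataclass
class MotionDetectorOutput:
    """Six-component output of the motion detector (cf. output 34):
    translation along and rotation about the x, y, and z axes."""
    dx: float = 0.0
    dy: float = 0.0
    dz: float = 0.0
    rot_x: float = 0.0
    rot_y: float = 0.0
    rot_z: float = 0.0

def process_raw_data(accel, gyro, dt):
    """Hypothetical stand-in for processor 32: fold accelerometer and gyro
    samples for one time step into the six-component output."""
    out = MotionDetectorOutput()
    out.dx, out.dy, out.dz = (a * dt * dt for a in accel)     # crude displacement estimate
    out.rot_x, out.rot_y, out.rot_z = (w * dt for w in gyro)  # angular rate * time
    return out

def select_device_behavior(output, threshold=0.01):
    """Stand-in for processor 16 mapping motion to a device behavior 36."""
    if abs(output.dx) > threshold:
        return "scroll_horizontally"
    if abs(output.rot_z) > threshold:
        return "rotate_view"
    return "no_action"

out = process_raw_data(accel=(0.5, 0.0, 0.0), gyro=(0.0, 0.0, 0.2), dt=0.1)
print(out, select_device_behavior(out))
```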

  FIG. 4 is an isometric view of a portable device 31 with motion detection capability according to a particular embodiment. The portable device 31 includes an x-axis accelerometer 33, a y-axis accelerometer 35, and a camera 37 facing along the z-axis. The x-axis 38, y-axis 39, and z-axis 40 are also shown with respect to device 31 for reference. The portable device 31 uses the accelerometers 33 and 35 and the camera 37 to detect movement, including tilt and translation in various directions. The portable device 31 may include other elements, such as those shown and described with respect to portable device 10, including display 12, input 14, processor 16, memory 18, and communication interface 20. As mentioned above, particular embodiments may include various types of motion detection elements (including accelerometers, gyros, cameras, rangefinders, or other suitable elements) in any combination and in any positions or orientations suitable for the device.

  In particular embodiments, a user interface function may utilize input motion along a particular axis of motion at a given time. For example, a device application may allow a user to scroll through a list displayed on the portable device by moving the device along a particular axis (eg, in one direction or bi-directionally). It may be difficult for the user to limit the movement of the device to that particular axis; in other words, some rotation or movement along other axes introduced by the user may be difficult to avoid. To address this issue, the device may include preferred motion selection functionality, comprising selection and amplification of the dominant motion and minimization of motion in other directions or along other axes.

  FIG. 5 illustrates the dominant motion selection and amplification described above, together with the minimization of motion in other directions. In the illustrated example, actual motion 41 represents the movement of the portable device. The actual motion 41 has a motion component 42 along one axis 44 and a motion component 46 along another axis 48 perpendicular to axis 44. Since the magnitude of motion 42 is greater than that of motion 46, the portable device selects motion 42 as the dominant motion. The device then amplifies this dominant motion and minimizes motion 46 (the other motion component), so that the actual motion 41 is processed by the device as the motion depicted as motion 50. The degree or magnitude of dominant motion amplification may vary in different embodiments depending on particular factors, such as the particular application currently running on the device. Dominant motion amplification may further be based on the magnitude of acceleration, the speed of motion, the ratio of motion in one direction (eg, motion 46) to motion in another direction (eg, motion 42), the size of the underlying desktop being navigated, or user preferences. In some embodiments, the portable device performs preferred motion selection only when certain motion attributes occur; for example, in some cases the device may select and amplify the dominant motion only if the motion along one axis is more than three times greater than the motion along the other axes. Other, smaller movements may then be minimized.

  Dominant motion selection and amplification and the minimization of other motion further extend the user's ability to take advantage of a motion user interface, allowing the portable device or applications running on it to filter out motion that the user would regard as unwanted noise. With this feature, the user may, for example, retrieve a list to be examined by moving the device to the left and then scroll the list by moving it up and down; inappropriate movement along other axes may be ignored or significantly reduced by the device.

  In particular embodiments, dominant motion selection and amplification and the minimization of other motion may also be applied to rotational motion of the device. A dominant rotation about one axis may be selected and amplified in a manner similar to that described above for translational motion along an axis, and rotation about other axes (not the dominant rotation) may be minimized.

  FIG. 6 shows a flowchart of preferred motion selection according to a particular embodiment of the present invention. In the flowchart, raw data corresponding to the movement of the portable device is received. In the illustrated example, the raw motion data comprises x-acceleration data 62a, y-acceleration data 62b, and z-acceleration data 62c, which are processed at step 64 to produce an output indicative of device movement. Other embodiments may include other types of raw motion data, such as optical or camera data, gyro data, and/or rangefinder data. After the raw acceleration data 62 is processed, the dominant axis of motion is selected at step 66. If the selected dominant axis of motion is the x-axis, motion along the x-axis is amplified at step 68a; if it is the y-axis, motion along the y-axis is amplified at step 68b; and if it is the z-axis, motion along the z-axis is amplified at step 68c. The amount of amplification of motion along the dominant axis may vary in different embodiments according to the application in use or other characteristics, and in some embodiments user preferences 69 may be used to determine the amount or type of amplification. Motion along axes other than the dominant axis is minimized as described above, so that such motion is ignored by the application in use. At step 70 the amplified motion is processed, resulting in device behavior 72. This processing step may include accessing the application in use and determining the particular device behavior to perform based on the amplified motion. The amplified motion may result in any of a number of types of device behavior, depending on the application in use, the particular user, or other factors.
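
  A minimal sketch of the preferred motion selection of FIG. 6, assuming the three-to-one ratio mentioned above as the trigger and a simple scalar gain for the amplification; the gain value and function name are illustrative assumptions.

```python
def select_dominant_motion(motion, ratio=3.0, gain=2.0):
    """Preferred-motion selection sketch: if movement along one axis exceeds
    movement along every other axis by `ratio`, amplify it by `gain` and zero
    out (minimise) the remaining components; otherwise pass the motion
    through unchanged."""
    axes = ("x", "y", "z")
    dominant = max(axes, key=lambda a: abs(motion[a]))
    others = [a for a in axes if a != dominant]
    if all(abs(motion[dominant]) >= ratio * abs(motion[a]) for a in others):
        return {a: (motion[a] * gain if a == dominant else 0.0) for a in axes}
    return dict(motion)

# Mostly-horizontal movement with small stray components is cleaned up:
print(select_dominant_motion({"x": 1.2, "y": 0.2, "z": 0.1}))
# -> {'x': 2.4, 'y': 0.0, 'z': 0.0}
# Ambiguous movement is left untouched:
print(select_dominant_motion({"x": 1.0, "y": 0.8, "z": 0.1}))
```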

  For particular user interfaces that use motion input, it may be valuable to base the information presented on the display 12 of the portable device 10, or the location within a virtual display, on the location of the device. For example, in certain embodiments that use translational input, such as navigating a map displayed on the device, the location of the portable device may directly determine the portion of the map shown on display 12. However, the usefulness of many tasks, such as map or menu navigation, may be diminished if device location information is maintained in absolute terms (eg, by a Global Positioning System (GPS) based system). It is therefore convenient in some situations to determine a "zero point" (reference point), or origin, in local terms, so that the zero point can be used to determine the behavior of the device. For example, if the zero point is set when the device is at point A, then motion between point A and point B is used as input. Useful applications of setting the zero point include external behaviors, such as moving a virtual display or placing an application in the space around the user's body. Setting the zero point also addresses internal behaviors, such as instructing the device to ignore the gravitational acceleration in its current orientation so that the device responds only to additional acceleration, presumably that caused by the user.

  A portable device according to particular embodiments may include application user interfaces that utilize motion input only at certain times. At other times the movement of the device may not be used as input, and it may then be useful to "turn off", or disable, the motion sensitivity or motion detection functionality of the device. Disabling the motion sensitivity functionality may include, for example, deactivating the motion detector 22 of the device 10 or another element (eg, a motion response module of the device). Particular embodiments therefore allow selective engagement and disengagement of the motion detection functionality of the device.

  As an example, a motion response module that modifies the display 12 based on motion detected by the motion detector 22 may have one mode of operation in which it waits for a trigger that switches it to another mode of operation in which motion detection is enabled. While motion detection is not enabled, any movement of the device is ignored. The trigger may also set a zero point for the device. Once the zero point is set, the motion response module may determine a baseline orientation of the device based on measurements from the motion detection elements. The baseline orientation may comprise the position of the device when the trigger is received (as determined by information from the motion detection elements). Further movement of the device is then compared with the baseline orientation to determine the function to perform or the modification to make to display 12 based on the user's movement of the device.
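
  The trigger-and-baseline behavior described above can be sketched as a small state machine that ignores motion until a trigger arrives, records the orientation at that moment as the zero point, and thereafter reports movement relative to it. The class and method names below are assumptions for illustration, not the disclosed module.

```python
class MotionResponseModule:
    """Sketch of a motion response module with two modes: waiting for a
    trigger, and tracking motion relative to a baseline orientation."""

    def __init__(self):
        self.enabled = False
        self.baseline = None  # orientation captured when the trigger fires

    def trigger(self, current_orientation):
        """Enable motion detection and set the zero point / baseline."""
        self.enabled = True
        self.baseline = tuple(current_orientation)

    def on_motion(self, orientation):
        """Return the change relative to the baseline, or None while the
        module is still waiting for a trigger (movement is ignored)."""
        if not self.enabled:
            return None
        return tuple(o - b for o, b in zip(orientation, self.baseline))

module = MotionResponseModule()
print(module.on_motion((10.0, 0.0, 0.0)))   # None: not yet enabled
module.trigger((10.0, 0.0, 0.0))            # eg, a key press sets the zero point
print(module.on_motion((12.5, 0.0, -1.0)))  # (2.5, 0.0, -1.0) relative movement
```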

  Particular embodiments may provide any number of user-initiated actions that act as triggers for selectively engaging or disengaging motion detection and for zero point selection. Such actions may include, for example, pressing a key on input 14, moving the device 10 in a particular way (eg, a movement corresponding to a particular gesture), and tapping the display 12. It should be understood that a user-initiated action may set the zero point and enable the motion detection functionality of the device at the same time.

  In some embodiments, the zero point, and the engagement or disengagement of the motion detection functionality, may be set passively during periods of inactivity or minimal activity (i.e., periods in which the device is relatively static). FIG. 7 shows a flowchart 80 for passively setting the zero point of the portable device. At step 82a a change in acceleration with respect to the x-axis is detected, at step 82b a change in acceleration with respect to the y-axis is detected, and at step 82c a change in acceleration with respect to the z-axis is detected. At steps 84a, 84b, 84c it is checked whether each detected acceleration change is greater than a particular respective threshold. If none of the detected acceleration changes for the three axes is greater than its threshold, the device may be considered at rest and a zero point is set at step 86a. The at-rest condition may be determined, for example, from the stabilization of the motion components or of the raw data of the elements of motion detector 22. If, however, the acceleration change detected along any of the three axes is greater than its threshold, the process returns to the acceleration change detection of steps 82. This method of passively setting the zero point thus ensures that the zero point is set when the portable device is at rest. Moreover, if the device is in constant motion but is not being moved by the user at a given time (for example, if it is sitting still in a train moving at constant speed), no acceleration change is detected and a zero point may still be set. Using a threshold to decide whether an acceleration change is large enough to trigger setting of the zero point enables a user holding the device in a nominally stationary manner to set the zero point passively, which would otherwise be difficult, because a device with very sensitive accelerometers may detect acceleration changes resulting from very small movements caused by the user. It should be understood that a similar method may be used with motion detectors that include elements other than accelerometers, and that thresholds may be used in a similar way to handle small unintended movements that would otherwise prevent the zero point from being set.
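
  A sketch of the passive zero-point logic of FIG. 7, assuming a per-axis threshold on the change between successive acceleration samples; the threshold value, sample format, and function name are illustrative assumptions.

```python
def passive_zero_point(samples, threshold=0.05):
    """Walk a stream of (ax, ay, az) acceleration samples and set the zero
    point at the first sample whose change from the previous sample is
    below `threshold` on every axis (steps 82-86 of FIG. 7, sketched).
    Returns the sample used as the zero point, or None if never settled."""
    previous = None
    for sample in samples:
        if previous is not None:
            changes = [abs(c - p) for c, p in zip(sample, previous)]
            if all(change <= threshold for change in changes):
                return sample          # device considered at rest: set zero point
        previous = sample
    return None                        # keep waiting for a quiet period

readings = [
    (0.00, 0.00, 9.81),
    (0.40, 0.10, 9.70),   # user still moving the device
    (0.41, 0.11, 9.71),   # change on every axis is small: zero point set here
]
print(passive_zero_point(readings))
```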

  Particular embodiments of the present invention include the ability to selectively and repeatedly enable and disable the motion detection functionality of a portable device, allowing large movements across a virtual desktop (or information space) using motion within a limited physical space. This process is similar to "scrubbing" with a mouse that controls a cursor: lifting the mouse off the surface and repositioning it elsewhere allows greater movement of the cursor, because lifting the mouse breaks the connection between mouse movement and cursor movement. Similarly, the connection between the movement of a portable device, such as device 10, and an operation, function, or action can be enabled and disabled based on the movement of the device.

  FIG. 8 shows an example of using the scrubbing functionality to navigate a virtual desktop or information space larger than the display of the portable device. In the illustrated example, the portable device is used to navigate through a virtual desktop 90. The virtual desktop 90 is shown as a grid map and may represent any suitable information that the user desires to navigate. The portion of the virtual desktop displayed on the portable device is represented by box 92. In this example, translational movement of the portable device is used to navigate through the virtual desktop 90; for example, to navigate from right to left through information in the virtual desktop 90, the user moves the portable device from right to left. Although the illustrated example shows the device being moved to the right to perform the scrubbing process, it should be understood that the portable device of particular embodiments may be moved in any manner suitable for performing that process.

  As described above, box 92 shows the information of virtual desktop 90 currently displayed on the device. If the user wishes to see the information represented by box 94, the user moves the portable device from left to right; in this embodiment, when the user moves the device to the right, the information of virtual desktop 90 contained in box 94 is displayed on the device. However, once the user's arm is fully extended to the user's right, the user would have to walk or otherwise move further to the right for the device to display what lies to the right of box 94 in the virtual desktop 90. In such a case, if the user cannot or does not wish to move further to the right, the user can selectively disable the motion detection functionality of the portable device, move the device back to the left, selectively re-enable the motion detection functionality, and then move the device to the right again to display the information to the right of box 94. In this way the user can display the information of virtual desktop 90 contained in box 96, and the process can be repeated to display the information contained in box 98, to the right of box 96.

  Selectively disabling and re-enabling the motion detection functionality of the device, to enable greater movement within the virtual desktop from within a limited physical space, may be performed in any of a variety of ways, such as by pressing a key on the device input, by moving the device according to a particular gesture or movement (eg, an arc-shaped movement), or by tapping the device screen. Any other user action may also be used to disable and enable the motion detection functionality for this purpose. Particular embodiments allow multiple different actions to disable and enable the motion detection functionality of the device, and the user action that disables the motion detection functionality may differ from the user action that enables it. The scrubbing process may be used in any suitable application, such as map navigation, menu navigation, and list scrolling.

  FIG. 9 is a flowchart illustrating the steps of the scrubbing process associated with FIG. 8, according to a particular embodiment. The flowchart begins at step 100, where the portable device is moved to the right to advance from the information displayed in box 92 of virtual display 90 to the information in box 94. As mentioned above, the user may wish to go on to display the information to the right of box 94 but may have run out of physical space in which to move the device further to the right. Therefore, at step 102, the user disables the motion detection functionality of the device. Any suitable user action may be used for this, such as pressing a button on the device or moving the device according to a particular gesture. At step 104, the user moves the device to the left, giving the user more physical space through which to move the device to the right once the motion detection functionality is available again.

  At step 106, the user enables the motion detection functionality of the device. Again, such enabling may be performed by any appropriate user action, which may be different from the user action used to disable motion detection at step 102. With the motion detection functionality enabled, at step 108 the user moves the device to the right, changing the information displayed on the device from the information in box 94 to the information in box 96. At step 110 it is determined whether the device needs to be moved further to the right. If further movement is required (eg, to display the information in box 98 of virtual display 90), the process returns to step 102 and the motion detection functionality of the device is again disabled; if no further movement is required, the process ends. As described above, the scrubbing process may be used in any suitable application of the device that supports motion input, and the device may be moved in any suitable manner to use this functionality.
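
  The scrubbing sequence of FIG. 9 amounts to gating the motion-to-display mapping with an enable flag, much like lifting a mouse off its pad. The viewport arithmetic and names below are assumptions made for illustration.

```python
class ScrubbingViewport:
    """Sketch of scrubbing: horizontal device motion pans a viewport across
    a wide virtual desktop, but only while motion detection is enabled."""

    def __init__(self):
        self.offset = 0.0           # viewport position on the virtual desktop
        self.motion_enabled = True

    def disable_motion(self):       # eg, a key press or a dedicated gesture
        self.motion_enabled = False

    def enable_motion(self):
        self.motion_enabled = True

    def on_translation(self, dx):
        """Apply device translation to the viewport only while enabled."""
        if self.motion_enabled:
            self.offset += dx

viewport = ScrubbingViewport()
viewport.on_translation(+2.0)   # step 100: box 92 -> box 94
viewport.disable_motion()       # step 102
viewport.on_translation(-2.0)   # step 104: reposition the arm, display unchanged
viewport.enable_motion()        # step 106
viewport.on_translation(+2.0)   # step 108: box 94 -> box 96
print(viewport.offset)          # 4.0: the view advanced even though the device returned
```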

  As noted above, particular movements of the device (eg, particular gestures) may be used in the scrubbing process to notify the device that the information shown on the display should not be changed during such movement. This allows the user to return the device to a position from which the user can move it to change the displayed information further. For example, the device may be at a certain reference position, and movement of the device away from that position changes the displayed information. A certain predetermined movement (eg, an arc-shaped movement) may be used to notify the device not to change the displayed information in response to motion until that movement is complete. When the predetermined movement is complete, the reference position may be reset, and further movement of the device from the reference position may again change the displayed information. The reference position identifies a baseline orientation of the device, represented by baseline components of the motion data received from the motion detection elements of the device. In particular embodiments, a gesture determined from movement relative to the reference position is received and a particular command is executed that changes the information displayed on the device.

  As described with respect to the various embodiments above, a portable device according to particular embodiments may be operated using multiple types or modes of input, including motion input modes such as a translation input mode and a gesture input mode. Multiple input modes may often be used in combination with one another, and in some cases the portable device may be set to recognize only one mode type at a time. In certain situations, the portable device may be configured to act on multiple types of non-motion input but only one type of motion input (eg, translation or gesture) at a particular point in time.

  To facilitate this flexibility of a portable device that recognizes multiple input modes, in particular embodiments a trigger is used to switch between input modes. For example, the user may press a particular key or move the device in a particular way (eg, with a particular gesture) to switch between input modes. In embodiments in which a device application recognizes and acts on multiple types of motion input, pressing a particular key or making a particular gesture may be used to switch the device between a translational motion input mode and a gesture input mode. The trigger may also consist simply of switching from one application to another, or from one displayed image to another. In some cases, a non-motion input mode and a motion input mode may be switched by a certain trigger. Any particular user action may be used as a trigger to switch between different input modes, including between different motion input modes; in some embodiments, voice commands or physical actions on the device (eg, tapping the device or its screen) may be used to switch the input mode.
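
  Mode switching by trigger can be sketched as a small dispatcher that routes device motion to whichever interpreter is currently active. The mode names and trigger handling below are assumptions for illustration only.

```python
from enum import Enum

class InputMode(Enum):
    TRANSLATION = "translation"
    GESTURE = "gesture"

class MotionInputRouter:
    """Sketch: a trigger (key press, tap, voice command, or a dedicated
    gesture) toggles which motion input mode interprets device movement."""

    def __init__(self):
        self.mode = InputMode.TRANSLATION

    def on_trigger(self):
        """Switch between the translation and gesture input modes."""
        self.mode = (InputMode.GESTURE if self.mode is InputMode.TRANSLATION
                     else InputMode.TRANSLATION)

    def handle_motion(self, motion):
        if self.mode is InputMode.TRANSLATION:
            return ("pan_display", motion)           # use net movement directly
        return ("match_against_gesture_db", motion)  # compare the path to stored gestures

router = MotionInputRouter()
print(router.handle_motion((1.0, 0.0, 0.0)))
router.on_trigger()                 # eg, the user taps the screen
print(router.handle_motion((1.0, 0.0, 0.0)))
```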

  In particular embodiments, the user action that enables the motion detection functionality of the device may also convey other information that affects the behavior of the device. For example, if the user makes one particular movement to enable translational motion detection, the device may respond to subsequent motion with a different sensitivity than if the user had made a different enabling movement. The enabling movement may also include gestures that identify the user or a context, thereby engaging various operational settings such as that user's preferences.

As mentioned above, particular embodiments include the ability to receive motion input to control various functions, tasks, and operations of the portable device, and motion input may be used to change the information displayed on the device in the process. In some cases such motion input takes the form of gestures rather than simple translation-based input. Gesture input may be used to navigate through a multi-dimensional menu or grid of an application. For example, as described above with respect to the scrubbing process, the display of the portable device is often smaller than the amount of information that could usefully be displayed (eg, menu options, map information), which tends to lead to narrow and deep menu structures. In many cases, however, a wide and shallow menu structure is preferable to a narrow and deep one, because the user does not have to remember as much information about where a particular function is located.

  FIG. 10A illustrates an example of menu navigation using gesture input according to a particular embodiment. In the illustrated example, the portable device is used to navigate through a virtual desktop 120. The virtual desktop 120 includes a menu tree with menu categories 122 available for selection, and each menu category 122 may include its own subcategories available for selection. In one embodiment, the menu categories 122 are categories of functions, and the subcategories under each menu selection are the actual functions within that category. In another example, the menu categories are nouns (eg, "folder", "document", "image") and the subcategories are verbs (eg, "move", "paste", "cut"). If the portable device is a cellular phone, the menu categories 122 might include "call", "phone book", "messages", "planner", "sounds", "settings", and other items. Each menu category 122 may include functions that become accessible when that category is selected. Although two menu levels are shown in FIG. 10A, it should be understood that a multi-dimensional desktop or information display navigated through a motion interface may include any number of levels (eg, menus).

  In the illustrated example, menu category 122e has been selected, and the subcategories 124 of menu category 122e are displayed so that they can be selected. Boxes 126 and 128 represent the information displayed to the user on the portable device. As shown, the virtual desktop 120 contains more information, or more menus, than can be displayed on the device at one time. The user moves the device according to particular gestures to navigate across or through the virtual desktop. Gestures may be used both to navigate through the various menu levels and to make menu selections. As an example, the user may move the device 10 clockwise (130) to advance to the right across the virtual desktop 120 by a predetermined amount (eg, from the information in box 126 to the information in box 128). A particular menu category 122 may be selected by an away gesture (132) or by a downward gesture (eg, to select menu category 122e), causing the subcategories 124 to be displayed for selection. Similarly, to move back up through the virtual desktop 120, the user may move the device 10 counterclockwise (134). In some cases, navigation may be accomplished with four gestures: a forward gesture, a backward gesture, a left gesture, and a right gesture. In some embodiments, gestures with vertical motion vectors may also be used for navigation.

  In particular embodiments, gestures that are mirror images of other gestures in use may be used to perform functions opposite to those performed by the other gestures. For example, motion toward the user may zoom in (enlarge), while the opposite motion, away from the user, may zoom out (reduce). Associating mirror-image or reverse gestures with opposite functions makes the device's motion user interface easier to learn and use.

  In some cases, the menu item in the center of the display may be highlighted for selection; in other cases, a particular gesture indicates which of multiple displayed selections the user wishes to select. It should be understood that menus and other information that the user navigates using gestures may be displayed on the portable device in any of a variety of ways. In some embodiments, only one level of information is displayed at a time (ie, one menu level), and lower or higher levels are not displayed until they are made available by a selection.

  FIG. 10B illustrates example gestures that may be used to perform various functions, such as allowing the user to navigate through the virtual desktop. The gestures in the illustrated example include an "up" gesture 133 for moving up within the desktop, a "down" gesture 135 for moving down, a "left" gesture 136 for moving left, a "right" gesture 137 for moving right, an "in" gesture 138 for moving toward the user, and an "out" gesture 139 for moving away from the user. These are merely examples of gestures and commands of a particular embodiment; it should be understood that other embodiments may include similar or different gestures associated with various commands for navigating through a desktop or performing particular functions on the portable device.
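
  The six gestures of FIG. 10B amount to a mapping from recognized gestures to navigation commands, in the spirit of the gesture and function mapping databases mentioned earlier. The dictionary-based dispatch below is a hypothetical sketch; the command names are assumptions.

```python
# Hypothetical mapping from the recognized gestures of FIG. 10B to navigation
# commands on the virtual desktop; the command names are illustrative only.
GESTURE_COMMANDS = {
    "up": "move_up",        # gesture 133
    "down": "move_down",    # gesture 135
    "left": "move_left",    # gesture 136
    "right": "move_right",  # gesture 137
    "in": "move_in",        # gesture 138: toward the user (eg, zoom in)
    "out": "move_out",      # gesture 139: away from the user (eg, zoom out)
}

def dispatch_gesture(gesture_name):
    """Look up the command for a recognized gesture; unmapped gestures are
    ignored, in the spirit of a gesture/function mapping database."""
    return GESTURE_COMMANDS.get(gesture_name, "ignore")

print(dispatch_gesture("in"))      # move_in
print(dispatch_gesture("wiggle"))  # ignore
```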

  FIG. 11 illustrates an example of map navigation using motion input according to a particular embodiment of the present invention. FIG. 11 includes a virtual desktop 140 representing a grid of information divided into 16 portions, each referred to by a letter (A, B, C, ..., P). These portions of the virtual desktop are identified by reference letters only for the purpose of describing this particular embodiment; portions of virtual desktops according to other embodiments need not be distinguished, by reference letters or otherwise, within the device application. The virtual desktop 140 contains more information than can be displayed on the portable device at one time. The virtual desktop 140 may represent any appropriate information that the user desires to navigate using the portable device, such as a street map. The user may desire to navigate through the virtual desktop 140 to display its various portions on the portable device, and may also wish to zoom in (and out) within the virtual desktop 140 to see certain portions of its information more clearly, that is, to change the granularity of the displayed information.

  In the illustrated example, box 142 represents the information currently displayed on the portable device 10, namely portions A, B, E, and F of the virtual desktop 140. In a particular embodiment, if the user wishes to change the displayed information to, for example, the information in portions C, D, G, and H, the user uses motion input to move box 142, which represents the device display, to the right by the required amount (in the illustrated example, two portions to the right). Such motion input may be translational input (moving the portable device 10 to the right by the applicable amount to change the displayed information) or gesture input (moving the portable device according to a particular gesture associated with that function). As an example, one gesture may be associated with moving the display one portion to the right, and another gesture with moving the display two portions to the right. The user can thus navigate the desktop 140 using translational input or gesture input.

  For example, through translational input or gesture input, the mobile device 10 allows the user to zoom in on the displayed information (to see such information more clearly). As an example using gesture input, if the information displayed on the device comprised four of the sixteen portions (eg, box 142 displaying portions A, B, E, and F), the user might use one of four gestures to zoom in on one of the four displayed portions (each of the four gestures being associated with zooming in on one portion). If the user moves the portable device according to the gesture associated with zooming in on portion B, the device may display the information represented by box 144 (portions B1, B2, B3, B4, B5, B6, B7, B8, B9), which collectively forms the information of portion B of the virtual desktop 140 in an enlarged view. Thus, the information of portion B is displayed larger and more clearly. When viewing the information of box 144 on the device, the user may further zoom in on or out of a particular displayed portion using appropriately associated gestures. If the user moves the portable device according to the gesture associated with zooming in on portion B2 (the same gesture used to zoom in on portion B when the information of box 142 was displayed), the device may display the information of box 146 (portions B2a, B2b, B2c, B2d, B2e, B2f, B2g, B2h, B2i). The user may also navigate through the virtual desktop while a particular portion is magnified. For example, when zoomed in on portion B (displaying the information of box 144), the user may move through the virtual desktop using translational or gesture input to see an enlarged view of portions other than portion B. For example, when viewing the information of box 144, the user may perform a gesture to move the displayed information to the right so that the entire display shows only the information of portion C of the virtual desktop 140 (ie, an enlarged view of portion C showing portions C1, C2, C3, C4, C5, C6, C7, C8, C9). It should be understood that the user may navigate through the information of the virtual desktop 140 (using both panning and zooming in and out) in any suitable manner using motion input.

  As mentioned above, any gesture suitable for navigating through a virtual desktop (or through a particular level) and for navigating between or through different levels or dimensions of a multi-dimensional desktop may be used. Furthermore, in some embodiments gesture movements may be used for navigating through a multi-dimensional desktop, while non-motion actions may be used to select or navigate between dimensions. Such non-motion actions may include device input by key presses. Thus, a combination of motion and non-motion actions may be used for multi-dimensional virtual desktop or menu navigation in certain embodiments.

  Particular embodiments allow gesture-based navigation through any suitable application, such as a multi-dimensional grid, menu, calendar, or other hierarchical application. For example, in a calendar application, some gestures may be used to navigate within one level, such as among the months, and other gestures may be used to navigate between levels, such as between years, months, days, hours, and events. Furthermore, various applications executed on a portable device that uses such gesture navigation may use different gestures, so the specific navigation gestures may vary depending on the specific application in use. In some embodiments, a translation-based interface, rather than a purely gesture-based one, may be used for navigation through the multi-dimensional information of the virtual desktop. For example, motion along the x-axis and y-axis may be used to move within a level of a hierarchy, and motion along the z-axis may be used to move between hierarchical levels.
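
  As an informal illustration of the axis mapping just described, the sketch below navigates a small hierarchy using x/y steps within a level and z steps between levels; the hierarchy contents, class name, and step conventions are assumptions, not part of the disclosure.

```python
# Illustrative sketch only: translation-based navigation of a hierarchy,
# where x/y motion moves within a level and z motion moves between levels.

HIERARCHY = {
    "2005": {"March": ["meeting", "dentist"], "April": ["trip"]},
    "2006": {"January": ["review"]},
}

class HierarchyNavigator:
    def __init__(self, root):
        self.root = root
        self.path = []            # selected key at each level above the current one
        self.index = 0            # position within the current level

    def _current_level(self):
        node = self.root
        for key in self.path:
            node = node[key]
        return list(node) if isinstance(node, dict) else node

    def move_xy(self, steps):
        """x/y motion: step through items within the current level."""
        items = self._current_level()
        self.index = (self.index + steps) % len(items)
        return items[self.index]

    def move_z(self, direction):
        """z motion: descend (+1) into the selected item or ascend (-1)."""
        node = self.root
        for key in self.path:
            node = node[key]
        if direction > 0 and isinstance(node, dict):
            self.path.append(self._current_level()[self.index])
            self.index = 0
        elif direction < 0 and self.path:
            self.path.pop()
            self.index = 0
        return self.path

nav = HierarchyNavigator(HIERARCHY)
print(nav.move_xy(1))   # step to "2006" within the year level
print(nav.move_z(+1))   # descend into 2006, now at the month level
```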

  Another example involves a facility phone book whose levels include the letters of the alphabet, names, contact details (eg, office, cellular, and home phone numbers, e-mail addresses, etc.), and actions such as initiating contact. In this example, the hierarchy includes both information (nouns) and actions (verbs). Some embodiments map this example to only two axes; for example, the y-axis is used for selection within a level of the hierarchy and the x-axis is used to move between levels. The z-axis may then be used to confirm an action, which helps prevent improper execution of the action.

  In some cases, particularly for translation-based navigation, the number of levels traversed may depend on the magnitude of the movement. Moving the device a small amount may advance one level at a time, while moving the device a larger amount may advance multiple levels at a time; the greater the magnitude of the motion, the more levels are traversed at once. As applied to gesture-based motion input, different gestures may be used to navigate different numbers of levels of the hierarchy at a time. These gestures may be the same motion performed at different magnitudes, or entirely different motions.
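
  A minimal sketch of how the magnitude of a movement might be mapped to the number of levels traversed is shown below; the acceleration thresholds and the three-level cap are illustrative assumptions only.

```python
# Illustrative sketch only: scaling the number of hierarchy levels traversed
# by the magnitude of a translation. The thresholds are assumptions.

def levels_for_magnitude(acceleration_peak_g):
    """Map the peak acceleration of a movement to a number of levels."""
    if acceleration_peak_g < 0.2:
        return 0          # below threshold: ignore as unintentional motion
    if acceleration_peak_g < 0.6:
        return 1          # small movement: one level at a time
    if acceleration_peak_g < 1.2:
        return 2          # larger movement: two levels at once
    return 3              # very large movement: three levels at once

for peak in (0.1, 0.4, 0.9, 1.5):
    print(peak, "->", levels_for_magnitude(peak), "level(s)")
```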

  Utilizing motion interface navigation through a multi-dimensional desktop or information display allows menus to be flattened, because the user can easily navigate through particular menus or dimensions of the virtual desktop that are too large to fit on the device display. As a result of such menu flattening, the user needs to memorize less information, and the functionality and capabilities of the device are enhanced for the user.

  As described above, in certain embodiments the mobile device allows a user to navigate through a virtual desktop using motion input. In some cases, the user may navigate over the information displayed on the portable device using a cursor. For example, certain information may be displayed on the device, and the user may use motion input to move a cursor around the device display in order to select a particular displayed item and perform a certain function. In some cases, motion input may be used to move the cursor, and non-motion actions (such as button presses) may be used to select the item currently indicated by the cursor. It should be understood that both gesture and translational input may be used in the various examples of cursor navigation.

  In certain embodiments, the information displayed is fixed with respect to the device while the cursor remains fixed in space, and device movement may be used to navigate the cursor relative to that information. FIG. 12A shows the use of this type of cursor navigation via motion input. Display 147 shows a display of the portable device. To illustrate this cursor navigation example, the display is divided into a grid to identify the information displayed. The grid includes portions A-P. Display 147 includes a cursor 148 located between portions C, D, G, and H. As described above, in this example the displayed information is fixed with respect to the device when the device is moved, while the cursor remains fixed in space; however, the position of the cursor relative to the displayed information changes according to the motion input. When the device is translated to the right according to the right motion 149, the cursor moves relative to the displayed information in the direction opposite to that of the device.

  Display 150 shows a possible display after the device has been moved according to right motion 149; the cursor 148 is now between portions A, B, E, and F. Since this example uses translation-based input, it should be understood that the amount of movement of the device (in this case, the amount of movement to the right) directly affects the amount of cursor movement relative to the displayed information. Display 152 shows the display after the portable device is further moved according to motion 151, with the cursor now between portions I, J, M, and N; because the cursor is fixed in space, it has moved downward relative to the displayed information. Display 154 shows the display after the portable device is moved according to the left motion 153, with the cursor 148 now between portions K, L, O, and P; the cursor has moved to the right relative to the displayed information. Thus, in this type of cursor navigation, moving the device changes the position of the cursor within the displayed information. In this example scheme, instead of using a stylus, the portable device itself is moved to point to a specific location of the displayed information.
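
  The following sketch illustrates, in simplified form, the FIG. 12A behavior in which the cursor is held approximately fixed in space by moving it opposite to the device's translation; the display dimensions, pixel units, and class name are assumptions.

```python
# Illustrative sketch only of the FIG. 12A scheme: the displayed information is
# fixed to the device and the cursor is (approximately) fixed in space, so each
# device translation moves the cursor by the opposite amount on the display.
# Screen coordinates: x increases to the right, y increases downward.

DISPLAY_W, DISPLAY_H = 320, 240      # display size in pixels (assumed)

class SpaceFixedCursor:
    def __init__(self):
        self.x = DISPLAY_W / 2       # cursor starts at the display centre
        self.y = DISPLAY_H / 2

    def on_device_translation(self, dx_px, dy_px):
        """Apply the opposite of the device translation to the cursor."""
        self.x -= dx_px
        self.y -= dy_px
        return (self.x, self.y)

cursor = SpaceFixedCursor()
print(cursor.on_device_translation(80, 0))   # device moves right -> cursor shifts left on the display
print(cursor.on_device_translation(0, 60))   # device moves down -> cursor shifts up on the display
```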

  At any point in the cursor navigation process, the user may utilize any form of input (eg, a gesture, key press, etc.) to make a selection or perform a function based on the information currently indicated by the cursor. For example, with respect to display 152, the user may use a specific gesture or press a button to zoom in on, select, or otherwise perform a function based on the information between portions I, J, M, and N currently indicated by cursor 148.

  As described above with respect to FIG. 12A, certain embodiments translate the cursor in a direction opposite to the motion of the device to change the position of the cursor relative to the displayed information. In one embodiment, the input motion of the device is resolved into motion components along each of three axes, two of which are parallel to the display of the device (eg, the x-axis and y-axis). While the information displayed on the device shifts based on the movement of the device in the x-y plane, the cursor is simultaneously moved according to a translation vector opposite to the combined movement in the x-axis and y-axis directions, so that the position of the cursor in space is substantially maintained. In some cases, if the translation vector would move the cursor past an edge of the display, the vector is reduced so as to keep the cursor within the display. Such reduction may include reducing one or more components of the translation vector so as to maintain the cursor within a predetermined distance from the display edge.
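
  One possible way to reduce the translation vector at the display edge, as just described, is sketched below; the display size and edge margin are assumed values, and clamping each component independently is only one of several reasonable choices.

```python
# Illustrative sketch only: reducing the components of the cursor translation
# vector so the cursor stays a margin away from the display edges.

DISPLAY_W, DISPLAY_H = 320, 240      # display size in pixels (assumed)
EDGE_MARGIN = 8                      # keep the cursor this many pixels from the edge

def clamp_translation(cursor_x, cursor_y, vx, vy):
    """Reduce each component of (vx, vy) so the cursor stays inside the margins."""
    max_x = DISPLAY_W - EDGE_MARGIN
    max_y = DISPLAY_H - EDGE_MARGIN
    new_x = min(max(cursor_x + vx, EDGE_MARGIN), max_x)
    new_y = min(max(cursor_y + vy, EDGE_MARGIN), max_y)
    return new_x, new_y

print(clamp_translation(300, 120, 40, 0))   # would cross the right edge: clamped to (312, 120)
```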

  The division of the displayed information into portions A-P is made only for the purpose of illustrating and describing the embodiment described above; it should be understood that the information displayed on the mobile device of a particular embodiment need not include such divisions or any other type of reference.

  FIG. 12B illustrates another form of cursor navigation via motion input according to certain embodiments. In this example, the cursor remains in a fixed location with respect to the display, and the motion input is used to navigate over a virtual desktop that is larger than the device display. FIG. 12B includes a virtual desktop 158 containing information (eg, a city map) that the user navigates using motion input at the mobile device. The virtual desktop 158 contains more information than can be displayed at one time on a particular mobile device. To illustrate this cursor navigation example, the virtual desktop 158 is divided into a grid to distinguish the information shown on the desktop. The grid includes six rows (A-F) and seven columns (1-7), and a portion of the grid is identified in this example by its row letter and column number (eg, portion B7 or D2). Dividing the virtual desktop 158 into portions referenced by row letters and column numbers is done only to illustrate and describe the embodiment; it should be understood that the information of the virtual desktop of a particular embodiment need not include such a partition or any other type of reference.

  Box 160 indicates the information of the virtual desktop 158 currently displayed on the mobile device, and display 161 shows a display of a portable device presenting box 160. Display 161 also includes a cursor 159 located at the intersection of portions B2, B3, C2, and C3. As described above, when the user moves around the virtual desktop using motion input (ie, moves the device to change the information displayed), the cursor remains in a fixed position with respect to the display, but the position of the cursor relative to the virtual desktop information displayed on the mobile device changes. For example, the user may use motion input to change the information displayed on the device to that of box 162. The information displayed on the device changes (to portions B5, B6, C5, C6), while the cursor 159 remains fixed on the device display (eg, in the center of the display in this example), as shown in display 163; thus, the relative position of the cursor with respect to the information of the virtual desktop 158 has changed. If the user uses motion input to change the information displayed on the device to that of box 164, the information displayed on the device changes to portions E3, E4, F3, and F4, and the cursor 159 again lies between the portions shown at the center of the display, because its position with respect to the display remains fixed in this embodiment.

  Thus, according to the form of cursor navigation described with respect to FIG. 12B, the cursor remains in a fixed position relative to the device display while the cursor position relative to the virtual desktop information changes. As with the embodiment shown and described with respect to FIG. 12A, at any point in the navigation process the user may utilize any form of input (eg, a gesture, key press, etc.) to make a selection or perform a function based on the information currently indicated by the cursor. For example, with respect to display 163, the user may use a specific gesture or press a button to zoom in on, select, or otherwise perform a function based on the information in portions B5, B6, C5, and C6 currently indicated by cursor 159.

  It should be understood that any particular input, such as a gesture or key press, may be used to switch between cursor navigation modes on the device. For example, the user may switch between the translation-controlled cursor mode of FIG. 12A and the display-fixed cursor mode of FIG. 12B.

  As described above, certain embodiments allow a user to move a portable device according to certain gestures to perform specific functions or operations. In some cases, however, the user may not move the device exactly according to the intended gesture, and as a result the device may not recognize the movement as that gesture. To indicate that a particular movement of the device by the user has been recognized as a specific gesture, the portable device of some embodiments provides feedback to notify the user that the movement has in fact been recognized as a gesture.

  This feedback may take an audio form such as speech, a beep, a tone, or music, a visual form such as an indication on the device display, a vibratory form, or any other suitable form of feedback. Audio feedback may be provided through a speaker or headphone jack of the user interface of device 10, and vibratory feedback may be provided through a vibration generation module of the user interface of device 10. Audio, visual, and vibratory feedback may be varied to provide a rich set of feedback indicators. As an example, vibratory feedback may change in frequency and amplitude, either alone or in various combinations over time. The richness and complexity of the feedback may be extended by combining feedback types with one another, such as using vibratory feedback in combination with audio feedback. In some cases, the feedback may be gesture-specific, so that one or more recognized gestures have their own respective feedback. For example, when one gesture is recognized the device may beep at a specific tone or a specific number of times, and when other gestures are recognized the tone or number of beeps may differ. Audio feedback may be particularly useful when a quick glance at the screen is not practical or for gestures whose function has no visual component (eg, calling a phone number with a cellular phone). In certain embodiments, other types of feedback may be context- or application-specific. Such contexts may include the state of the device, for example which application is in focus or in use, the battery level, the available memory, or a user-defined state such as a quiet or silent mode. For example, the portable device may use vibratory feedback rather than audio feedback in response to a gesture input when in silent mode. This feedback process may also be used when a portable device serves as a motion input device for a computer or other element.

  Similar to the feedback on gesture recognition described above, the portable device in certain embodiments also provides feedback to the user when the device is in a gesture input mode and a particular movement by the user is not recognized as a gesture. For example, if a motion appears to have been intended as a gesture but cannot be matched to a specific gesture known to the device, a failure sound may be played. This notifies the user that the device must be moved again, more closely following the intended gesture, in order to perform the desired operation or function. The feedback informing the user that a movement has not been recognized may be audible, visual, vibratory, or in any other suitable form, and is different from the feedback given when a movement is recognized by the device as a specific gesture. To ascertain whether the user intended to input a gesture, the mobile device 10 may examine certain characteristics of the movement that indicate it was intended as a gesture, such as the amplitude of the motion, the length of time the motion remains above a threshold, and the number and spacing of accelerations. If a particular gesture is not recognized by the device, a gesture feedback system may be used to determine the intended gesture; for example, audio feedback may present the candidate gestures determined at the mobile device, and the user may use gestures to navigate an audio menu to select the intended gesture.
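
  The sketch below illustrates one way the device might decide whether an unrecognized movement was probably intended as a gesture, using the amplitude, duration, and acceleration-count characteristics mentioned above; all thresholds and the sample format are assumptions.

```python
# Illustrative sketch only: deciding whether an unrecognized movement was
# probably intended as a gesture, so the device can give "not recognized"
# feedback rather than staying silent. Thresholds are assumptions.

def looks_like_intended_gesture(samples, g_threshold=0.5,
                                min_duration_s=0.3, min_peaks=2):
    """samples: list of (timestamp_s, acceleration_magnitude_g) pairs."""
    above = [(t, a) for t, a in samples if a >= g_threshold]
    if not above:
        return False
    duration = above[-1][0] - above[0][0]
    # count separate bursts of acceleration above the threshold
    peaks, previous_t = 0, None
    for t, _ in above:
        if previous_t is None or t - previous_t > 0.1:
            peaks += 1
        previous_t = t
    return duration >= min_duration_s and peaks >= min_peaks

motion = [(0.00, 0.8), (0.05, 0.9), (0.40, 0.7), (0.45, 0.6)]
print(looks_like_intended_gesture(motion))   # True -> play "gesture not recognized" feedback
```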

  In certain embodiments, an audio or vibratory feedback system allows the user to operate the mobile device 10 without relying on the visual display 12. For example, in one embodiment the mobile device provides audio, visual, or vibratory feedback to a user navigating a virtual desktop, a menu, or other information; the feedback of the device, combined with the user's motion input, functions as a kind of "conversation" between the user and the device. As mentioned above, multiple and complex types of feedback may be used. This feedback process is particularly advantageous in environments where viewing the device display is difficult, unsafe, or impractical (eg, while driving or in a dark environment).

  It should also be understood that feedback, such as audio, visual, and vibratory feedback, may be used in embodiments involving translational input. For example, with translational input, a feedback indicator may be provided when the user reaches the edge of the virtual desktop or reaches some other limit.

  FIG. 13 is a flowchart 170 illustrating a process for providing feedback in response to motion input according to a particular embodiment. In step 172 of the process, raw motion data is received at the mobile device 10. As mentioned above, the raw data may be received from any combination of accelerometers, gyros, cameras, distance meters, or any other suitable motion detection elements. In step 174, the raw motion data is processed to produce a motion detector output indicative of the motion of the device. Such processing may include various filtering techniques and fusion of data from multiple detection elements.

  In step 176, the state of the device is checked, because in some embodiments the feedback for a particular motion depends on the state of the device when that motion is received. As described above, examples of device states may include the particular application in focus or in use, the battery level, the available memory, a particular mode (eg, silent mode), and so forth. In step 178, the motion detector output is analyzed with respect to the state of the device, and in step 180 it is determined whether the motion indicated by the motion detector output is meaningful or recognizable given the device state. For example, a particular gesture may perform a function in one application (eg, a calendar application) while providing no function in another application. If the gesture is recognizable or meaningful given the device state, feedback is provided at step 182. As described above, in certain embodiments the feedback may be in audio, visual, or vibratory form. In some cases, the feedback may simply indicate that the device has recognized a gesture given the state of the device; in other cases, for example when the user is providing a series of inputs to a particular application to perform one or more functions, the feedback may take the form of a further query for additional input. In step 184, the device behaves according to the motion input and the device state, and the process returns to step 172 to receive further raw data.

  If it is determined at step 180 that the motion indicated by the motion detector output is not meaningful or recognizable given the particular device state, the process proceeds to step 186, where it is determined whether the motion exceeds a certain threshold. This determination may be made, for example, to establish whether the motion input was intended as a gesture. As described above, the threshold characteristics used in this determination may include the amplitude of the motion input, the elapsed time of the motion input, and the number and spacing of accelerations. If it is determined that the motion input did not exceed the threshold, the process returns to step 172 and further raw motion data is received. If, however, the motion input exceeds the threshold, suggesting an intended gesture that was not recognized or was not meaningful given the device state, feedback is provided at step 188. The feedback may include audio, visual, and/or vibratory feedback and may indicate that the gesture was not recognized or was not meaningful; in a particular embodiment, the feedback may also include a query about the intended gesture or present the user with a number of possible gestures from which the user may select the one intended by the motion. It should be understood that certain embodiments may not include some of the steps described above (eg, some embodiments may not include the threshold determination of step 186), while other embodiments may include additional steps or the same steps in a different order. As suggested above, particular embodiments may utilize motion input feedback (including feedback "conversations") in any number of applications and ways, and the type and complexity of the feedback system may vary greatly between embodiments.
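
  As an informal summary of the flow of FIG. 13, the sketch below maps a processed gesture and the device state either to an action with acknowledgement feedback, to "not recognized" feedback, or to no response; the application names, gestures, commands, and feedback labels are assumptions.

```python
# Illustrative sketch only of the flow of FIG. 13: the motion output is checked
# against the current device state, and either an action with acknowledgement
# feedback, a "not recognized" feedback, or no response is produced.

COMMAND_MAPS = {
    "calendar": {"O": "open_event", "D": "delete_event"},
    "phone":    {"O": "redial"},
}

def handle_motion(gesture, device_state, exceeds_threshold):
    """Return (action, feedback) for a processed motion detector output."""
    commands = COMMAND_MAPS.get(device_state["focused_app"], {})
    if gesture in commands:                       # step 180: meaningful in this state
        return commands[gesture], "ack_beep"      # steps 182/184: feedback, then act
    if exceeds_threshold:                         # step 186: probably an intended gesture
        return None, "failure_tone"               # step 188: tell the user it was not recognized
    return None, None                             # below threshold: ignore silently

print(handle_motion("O", {"focused_app": "calendar"}, True))   # ('open_event', 'ack_beep')
print(handle_motion("X", {"focused_app": "phone"},    True))   # (None, 'failure_tone')
```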

  As mentioned above, a portable device according to a particular embodiment may receive gesture motion input to control any number of functions of any number of applications running on the device. Some applications that use gesture input may include mobile commerce (mCommerce) applications, in which a mobile device such as mobile device 10 is used to carry out various transactions, such as commercial or consumer purchases. Many mCommerce applications use some form of authentication to verify the user, such as a personal identification number (PIN), credit card information, and/or possession of the mobile device. However, many forms of authentication can "leak": they may be shared or used by others, intentionally or accidentally. Another form of authentication is the user's written signature, which does not suffer from this leakage problem to the same degree, because forgery is generally difficult and relatively easy to detect. Certain embodiments utilize motion input to receive a user's signature as a form of authentication for mCommerce or other transactions performed via a mobile device.

  A written signature may be regarded as a two-dimensional record of a gesture. With a mobile device having motion input capability, the user's signature may instead be three-dimensional, forming a "spatial signature." Furthermore, when combined with other forms of input received at the device, the user's signature may take on any number of dimensions (eg, three, four, five, or even more). For example, a three-dimensional gesture "drawn" in space using the device and detected by the motion detector 22 may be combined with key presses or other inputs to increase the number of dimensions of the signature.

  These spatial signatures can be tracked, recorded, and analyzed by the motion detector 22 of the portable device. They can be recorded with varying degrees of precision, depending on the number of motion detection elements used, and still serve as an effective form of authentication. The user's spatial signature may be a three-dimensional form based on the user's conventional two-dimensional written signature, or it may be any other suitable gesture that the user records at the mobile device as his or her signature.

  In some embodiments, the process of recognizing a spatial signature may involve pattern recognition and learning algorithms. The process may analyze the relative timing of the key accelerations associated with the signature, such as those relating to the beginning and end of the movement, the curvature of the movement, and other movement attributes. In one example, a hash of a data set of points of the signature motion is stored, and subsequent signatures are compared against that hash for authentication. The device may additionally determine whether the signature is genuine by checking whether it is sufficiently unique. For example, in certain embodiments a signature may be detected (eg, by a signature detection module of device 10) by comparing a particular movement of the device against an initial or reference position. Such a comparison may be made by comparing the sequence of accelerations of the movement with a predetermined sequence of accelerations of the stored spatial signature, and this determination may be made regardless of the scale of the user's input motion signature.

  In some embodiments, the device can detect whether a movement of the device matches the signature by checking whether the positions through which the device moves from an initial point match the spatial signature.
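
  A simplified sketch of comparing a candidate motion against a stored spatial signature as a sequence of accelerations, normalized for scale as described above, is given below; the sample format, the equal-length requirement, and the tolerance value are assumptions rather than the actual matching algorithm.

```python
# Illustrative sketch only: comparing a candidate motion against a stored
# spatial signature as a sequence of accelerations, after normalising for the
# overall size of the movement. The tolerance value is an assumption.

def normalise(seq):
    """Scale an acceleration sequence so its largest component magnitude is 1."""
    peak = max(max(abs(a) for a in sample) for sample in seq) or 1.0
    return [tuple(a / peak for a in sample) for sample in seq]

def matches_signature(candidate, stored, tolerance=0.15):
    """candidate/stored: equal-length lists of (ax, ay, az) samples."""
    if len(candidate) != len(stored):
        return False
    candidate, stored = normalise(candidate), normalise(stored)
    error = sum(sum((c - s) ** 2 for c, s in zip(cs, ss)) ** 0.5
                for cs, ss in zip(candidate, stored)) / len(stored)
    return error <= tolerance

stored_signature = [(0.0, 1.0, 0.0), (0.5, 0.5, 0.0), (1.0, 0.0, 0.0)]
attempt          = [(0.0, 2.1, 0.0), (1.0, 1.0, 0.0), (2.0, 0.1, 0.0)]  # same shape, larger scale
print(matches_signature(attempt, stored_signature))   # True
```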

  FIG. 14 shows an example system 200 that uses a spatial signature as authentication for an mCommerce transaction. The system 200 includes a mobile device 10, an mCommerce application 202, an authentication device 204, and a communication network 206. The mCommerce application 202 may comprise any suitable application for conducting transactions with a user's mobile device. Such transactions may include consumer purchases, for example via a website, online bill payments, account management, or any other suitable commercial transaction involving the goods or services of a company or another user. The authentication device 204 authenticates, or verifies, the spatial signature entered by the user of the mobile device 10 in order to complete the mCommerce transaction, and may store one or more spatial signatures of one or more users for authentication of mCommerce transactions. In some embodiments, the authentication device may reside at the mobile device 10, at the mCommerce application 202, or at any other suitable location. The communication network 206 conveys information and data between the elements of the system 200 and may include one or more wide area networks (WANs), public switched telephone networks (PSTNs), local area networks (LANs), the Internet, and/or globally distributed networks such as intranets, extranets, or other forms of wireless or wireline communication networks. The communication network 206 may include any suitable combination of routers, hubs, switches, gateways, or other hardware, software, or embedded logic implementing any number of communication protocols that allow the exchange of information or data within the system 200.

  In operation, when a user uses the mobile device 10 to carry out a transaction with the mCommerce application 202, the user may be authenticated via motion input, for example by moving the device according to the user's three-dimensional signature. As an example, a user might use his or her cellular phone at a point of purchase (eg, a store) in place of a credit card; instead of signing a paper form that must be transported and processed, the user simply moves the device 10 according to the user's spatial signature. As described above, the user's signature may include more than three dimensions in some embodiments. The signature may have been previously recorded by the user using the mobile device 10 or another mobile device, and the recorded signature may be stored at the mobile device 10, at the mCommerce application 202, at the authentication device 204, or at any other suitable location (eg, in a signature storage database holding signatures of a plurality of mCommerce users).

  The movement of the mobile device 10 may be processed at the device, and a motion output indicative of the movement may be transmitted to the mCommerce application 202. The mCommerce application 202 may pass the motion output to the authentication device 204 for verification (ie, confirmation that the motion input received at the device 10 is indeed the signature of the user seeking to complete the mCommerce transaction). Once the authentication device 204 verifies the user's signature, the mCommerce application may complete the transaction with the user. As described above, in certain embodiments the authentication device 204 may reside at the mobile device 10 or at the mCommerce application 202, and it may access verified signatures stored at the device 10, at the mCommerce application 202, or at some other suitable location.

  In non-mCommerce applications as well, authentication may be performed at the portable device, for example when electronic security is desired for a function that uses the device to transmit private or secure data. A user who wishes to transmit data or other information using the mobile device 10 may use his or her signature in the encryption process. Spatial signatures may be used in any of a variety of ways to protect data communicated over a network and may be used together with public/private key encryption systems. For example, in one embodiment the mobile device may authenticate the user's signature received via motion input and then use its private key to encrypt the data to be transmitted. In another example, data may be transmitted to the mobile device 10, and the intended recipient may be required to enter a spatial signature in order to have the data decrypted. In some embodiments, the data may be communicated to a computer wirelessly connected to the portable device 10, and the intended recipient must use the portable device 10 as the means of conveying the user's signature to the computer for decryption of the data. Further, in certain embodiments the user's spatial signature itself may represent an encryption key, so that the motion of the device generates the key rather than the signature motion merely being used for authentication. In some cases, the device may recognize a combination of accelerations as a signature and convert that signature into an equivalent private key, which the portable device may then use as part of the transaction authentication process.

  In certain embodiments, spatial signatures may also be used to manage physical access to buildings and events. For example, the signature entered by the user of the device may be checked against a list of people allowed to enter, or against a will-call list to confirm that the user has paid for and reserved admission to an event.

  In a particular embodiment, the user may use motion input at the mobile device to control other devices, such as audio/video equipment, household appliances and devices, computing devices, or any other device controllable by the mobile device. A device may be controlled by the portable device 10 through the communication interface 20 of device 10, using any of a number of wireless or wireline protocols, including cellular, Bluetooth, and 802.11 protocols. In some embodiments, device 10 may receive motion input to control other devices over a network, via wireless or wireline communication. Thus, a device controlled through the motion input of device 10 may be located anywhere with respect to device 10, for example in the same room or across a region.

  As an example, when the mobile device 10 is a cellular phone operating with Bluetooth, certain gestures or other movements of the cellular phone may wirelessly communicate commands for controlling another device, such as a laptop across the room running a PowerPoint presentation. Other devices that may be controlled via the motion input of the portable device 10 include televisions, radios, stereo equipment, satellite receivers, cable boxes, DVD players, digital video recorders, lights, air conditioners, heaters, thermostats, security systems, kitchen appliances (ovens, refrigerators, freezers, microwave ovens, coffee makers, bread makers, toasters, etc.), PDAs, desktop and laptop PCs, computer peripherals, projectors, radio-controlled cars, boats, aircraft, and any other device. As another example, a commuter may shake his or her cellular phone in a particular way to command the heater in the commuter's house to turn on before the commuter arrives home. In some embodiments, the mobile device may receive and process the raw data to determine the command or function intended for the other device. In other embodiments, the motion detector of the portable device may output the raw data received from its motion detection elements to one or more devices controlled by device 10 through the movement of device 10, and the various devices controlled by device 10 may thus process the same raw data from device 10 in different ways. For example, a particular gesture of device 10 may cause different functions to be performed by the various devices controlled by device 10.

  FIG. 15 shows an example system 220 in which the mobile device 10 controls a plurality of other devices through motion input at device 10. The system 220 includes the mobile device 10, a laptop 222 coupled to the mobile device 10 through a wireless or wireline link, and a remote device 224 coupled to the mobile device 10 via a communication network 226. The portable device 10 receives raw motion data relating to a particular movement of the device through motion detection elements such as accelerometers, cameras, distance meters, and/or gyros. The raw motion data is processed at the mobile device, and databases such as a gesture database and a gesture mapping database may be accessed to determine the matching gesture and the intended function based on the movement tracked by the control module of the device. The intended function may relate to another device controlled by the mobile device 10, such as the laptop 222 or the remote device 224; the motion input thus serves as an interface for operation signals communicated from device 10 to the device being controlled. In other embodiments, raw data or other data simply indicating the particular motion input at device 10 may be sent directly to the laptop 222 and/or the remote device 224 without the function being determined at device 10. In these cases, the laptop 222 and/or the remote device 224 may themselves process the raw motion data received from the portable device 10 to determine one or more intended functions or operations to be performed based on that data. In some embodiments, the user of device 10 may indicate to the mobile device 10, via motion input or otherwise, which other device the intended function or raw motion data should be communicated to. Although two devices controlled by the mobile device 10 are shown, it should be understood that certain embodiments may include any number of devices of different types controlled by the mobile device 10 via motion input as described above.

  As described above, certain embodiments include the ability to control other devices, such as other local or remote devices, via the motion input of the mobile device 10. In one embodiment, the user of the mobile device 10 selects the other device for which a particular motion input of device 10 is intended. For example, the user may select a local or remote device using the user input 14 of the mobile device 10 (eg, by pressing a button or moving a trackball) and then move the device 10 according to the particular movement associated with the desired function or operation of the device being controlled. In certain embodiments, however, the user may move the mobile device according to a particular gesture to select the other device (eg, another local or remote device) to be controlled at that time via the motion input of device 10. Thus, certain embodiments provide gestural selection of the other devices controlled by the mobile device 10.

  The mobile device 10 may include a device selection module operable to detect a device selection gesture indicating that the user wishes to select a particular device for control. Each controllable device may have its own gesture command map, which maps gestures entered using device 10 to commands of that controllable device, and the control module of the portable device selects the particular command map corresponding to the controllable device selected for control. In some embodiments, the device 10 includes a device locator operable to detect, for each of a plurality of remote devices, the direction from the portable device to that remote device. In this case, the user may move the portable device in the direction of the particular remote device the user wishes to control in order to select that remote device for control.
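
  The sketch below illustrates the idea of a device selection gesture followed by a per-device gesture command map lookup; the device names, gestures, and commands are invented for illustration only.

```python
# Illustrative sketch only: selecting a controllable device and looking up the
# command for a gesture in that device's own gesture command map.

GESTURE_COMMAND_MAPS = {
    "laptop":     {"right": "next_slide", "left": "previous_slide"},
    "thermostat": {"up": "raise_temperature", "down": "lower_temperature"},
}

SELECTION_GESTURES = {"circle": "laptop", "shake": "thermostat"}

class DeviceController:
    def __init__(self):
        self.selected = None

    def handle_gesture(self, gesture):
        if gesture in SELECTION_GESTURES:           # device selection gesture
            self.selected = SELECTION_GESTURES[gesture]
            return ("selected", self.selected)
        if self.selected:                           # control gesture for the selected device
            command = GESTURE_COMMAND_MAPS[self.selected].get(gesture)
            if command:
                return ("send", self.selected, command)
        return ("ignored", gesture)

controller = DeviceController()
print(controller.handle_gesture("circle"))   # ('selected', 'laptop')
print(controller.handle_gesture("right"))    # ('send', 'laptop', 'next_slide')
```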

  Although the motion input of device 10 may be used for such control of other devices, other types of input (eg, using input 14) may also be used to control the local or remote device selected for control through gesture input. In other embodiments, different gestures may each be associated with controlling a different device. In yet another embodiment, the device 10 may display the other devices it can control and the specific gestures used to select them, presenting the user with options for selecting which other device the user wishes to control via device 10. A portable device according to the invention may use any particular gestural selection method for one or more of the local or remote devices it controls.

  As described above, certain embodiments include a portable device that detects its motion via the motion detector 22 and alters its behavior in some manner according to the detected motion. In some embodiments, the mobile device 10 can also model its particular environment and subsequently modify its behavior based on that environment. One difference between modeling the environment of a mobile device and detecting a specific motion of the device is that the former involves a degree of inference while the latter does not. As an example, when a mobile device changes its behavior because it is moved according to a specific gesture, it is detecting a specific movement and reacting to it; when a device determines that it is lying face down on a table and reacts accordingly, it is modeling its environment. As another example, when a mobile device moves to the left and changes its behavior based on that movement, it is detecting and reacting to motion; when a device determines that it is in free fall and powers down to survive an imminent impact with the ground, it is modeling its environment. A further distinction is that environmental modeling does not require a quick response to user input, whereas event detection, such as detection of a specific motion, generally does. Thus, environmental modeling involves detecting patterns of motion (or their absence), matching what is detected against a set of environmental conditions, and modifying the behavior of the device based on the modeled environment. The behavior adopted based on the modeled environment may also vary depending on the particular application in use or in focus; in some cases, the device may vary its sensitivity to particular motions based on the modeled environment.

  As an example, the portable device may recognize, via an accelerometer or other motion detection element, that it is at rest on an approximately horizontal surface. Such recognition may result from determining that the acceleration perpendicular to a surface remains steady at 1 g and that the device is otherwise still. The device may be able to distinguish between resting in a user's hand and lying on a table, because a user's hand generally cannot hold the device perfectly still. The device may then behave in a predetermined manner based on the recognition that it is at rest on an approximately horizontal surface. For example, if the portable device recognizes that it is lying on a table, it may power down after lying in that position for a predetermined period of time. As another example, a cellular phone in vibrate mode may vibrate more gently when ringing or during some other event that causes it to vibrate if it recognizes that it is on a table. In some embodiments, the device may recognize its orientation while lying on the table and behave in one way (eg, powering down) when lying in a "face down" position, while behaving differently when lying face up. If the mobile device 10 comprises a cellular phone, it may enter a speaker mode when it is called while recognizing that it has been placed "face up" on a table, whereas if it is called while lying face down on the table, it may enter a mute mode.

  As another example, the portable device 10 may recognize that it has experienced approximately 0 g for a short period of time and is therefore in free fall, and may then act to reduce the damage from an impending collision with the ground or another surface. Such actions may include, for example, powering down chips and/or the hard drive, retracting a lens, closing a cover, or any other device behavior. In certain embodiments, devices that are not portable, or that do not detect motion for input, may also model their environment and act based on the modeled environment. As a further example, a pattern of accelerations may be detected and the mobile device 10 may recognize that it is in a moving environment (for example, being held by a user in a car or on a train); sensitivities, thresholds, and/or other characteristics may then be adjusted to allow better performance of the device in that environment.
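
  As a rough illustration of such environmental modeling, the sketch below classifies a short window of accelerometer magnitudes into a few example environments; the window, thresholds, and environment labels are assumptions, and real implementations would be considerably more involved.

```python
# Illustrative sketch only: classifying a simple environment from recent
# accelerometer samples. The window length and thresholds are assumptions.

def model_environment(samples_g):
    """samples_g: recent acceleration magnitudes (in g) over a short window."""
    mean = sum(samples_g) / len(samples_g)
    variance = sum((a - mean) ** 2 for a in samples_g) / len(samples_g)
    if mean < 0.2:
        return "free_fall"            # ~0 g for a short period: power down to survive impact
    if abs(mean - 1.0) < 0.05 and variance < 1e-4:
        return "at_rest_on_surface"   # steady 1 g, essentially no jitter: lying on a table
    if abs(mean - 1.0) < 0.1 and variance < 1e-2:
        return "held_in_hand"         # roughly 1 g with a small tremor
    return "moving"                   # eg, in a vehicle or being gestured with

print(model_environment([1.0] * 20))                  # at_rest_on_surface
print(model_environment([0.05, 0.03, 0.02, 0.04]))    # free_fall
```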

  In another example, the mobile device 10 may comprise a digital camera. Using its motion detection elements, the camera may determine whether it is mounted on a tripod or is being held by the user when a picture is taken, and may set the shutter speed for the photograph accordingly (for example, a slower shutter speed when on a tripod and a faster shutter speed when held by the user).

  If the mobile device 10 uses a cradle to synchronize with another device such as a PC, the device 10 may recognize that it is in the cradle based on its stillness (or support) and its particular orientation. The device may then operate or function according to its being in the cradle (eg, by synchronizing with the associated PC).

  FIG. 16 is a flowchart 230 illustrating an environmental modeling process according to a particular embodiment. In step 232, raw motion data is received at the mobile device 10. As noted above, the raw motion data may be received from any combination of accelerometers, gyros, cameras, distance meters, or any other suitable motion detection elements. In step 234, the raw motion data is processed to produce a motion detector output, from which the motion and orientation of the device are determined in step 236. Box 237 shows examples of device motion and orientation, such as rotation about the z-axis (box 237a), translation along the x-axis (box 237b), orientation at particular angles α, θ, ω (box 237c), and being stationary (box 237n). These are merely examples of device motion and orientation, and any number of motions and orientations may be determined in step 236. In some embodiments, the determined orientation may be the orientation of the device with respect to gravity.

  In step 238, the mobile device 10 determines its environment based on the motion and orientation determined in step 236. Box 239 shows examples of device environments, such as lying on a table (box 239a), falling (box 239b), riding on a train (box 239c), and being held in the hand (box 239d). Any number of environments may be determined based on the motion and orientation determined in step 236. In certain examples, the environmental determination may also be based on the history of the device, such as its motion and orientation history. For example, with the cellular phone speaker-mode feature, the device may detect that it is level and still following a brief movement that occurs during a ring (eg, the brief movement caused by the user laying the phone face up on the table). The phone can detect that it is ringing, and being still in a position perpendicular to gravity while ringing may have a different meaning than when the phone is not ringing. Thus, the determination of the environment may be based on the motion and orientation of the device together with its history, where the history may include previous motions and orientations of the device or any other information regarding the device's past.

  In step 240, the determined environment is mapped to a particular behavior. In addition to the determined environment, the associated behavior may depend on a number of factors, such as the particular user using the device at the time and the desired characteristics of the particular application in use or in focus. Examples of behaviors corresponding to particular modeled environments include engaging the mute function of the portable device (box 241a), powering down the device's chips to survive a collision (box 241b), and raising the motion activation threshold of the device (box 241n). The mute behavior of box 241a may be applied when the environment of a cellular phone is lying on a table while ringing. The chip power-down behavior of box 241b may be applied when the environment of the portable device 10 indicates that the device is in free fall. The behavior of raising the motion activation threshold of box 241n may be applied when the environment of the mobile device is in a car or on a train, where a larger motion threshold is needed for the user's motion input to register as intended input. Particular embodiments may include any number of behaviors associated with one or more modeled environments. In step 242, the portable device behaves according to the behavior associated with the environment in step 240.
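
  The sketch below restates steps 238 through 242 as a simple lookup from a modeled environment (optionally qualified by the application in focus) to a behavior; the environment names and behaviors loosely mirror boxes 241a, 241b, and 241n but are otherwise assumptions.

```python
# Illustrative sketch only of steps 238-242 in FIG. 16: the determined
# environment is mapped to a behaviour, possibly depending on the application
# in focus. Environment names and behaviours are assumptions.

BEHAVIOUR_MAP = {
    ("face_down_on_table", "phone"):  "enable_mute",             # cf. box 241a
    ("free_fall",          None):     "power_down_chips",        # cf. box 241b
    ("in_vehicle",         None):     "raise_motion_threshold",  # cf. box 241n
}

def behaviour_for(environment, focused_app):
    """Look up an application-specific behaviour first, then a general one."""
    return (BEHAVIOUR_MAP.get((environment, focused_app))
            or BEHAVIOUR_MAP.get((environment, None))
            or "no_change")

print(behaviour_for("face_down_on_table", "phone"))   # enable_mute
print(behaviour_for("free_fall", "camera"))           # power_down_chips
```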

  As described above, a user may move the mobile device according to a particular gesture to cause the device to perform a desired function, operation, or task. In a particular embodiment, the gestures used as motion input for the device may consist of pre-existing symbols, such as letters of the alphabet, pictorial symbols, or any other alphanumeric characters, pictographs, or representations. For example, gestures used for motion input may mimic upper- and lower-case letters of the alphabet in any language, Arabic and Roman numerals, and shorthand symbols. Pre-existing gestures may likewise be used when the portable device serves as an input device for other local and remote devices. Utilizing pre-existing gestures for portable device input can ease the user's learning of the gesture motion interface.

  FIG. 17 shows examples of gestures associated with particular functions. For example, if the mobile device 10 comprises a cellular phone, the user may move the device 10 in the shape of a heart (250) to call the user's girlfriend, boyfriend, or spouse, or in the shape of a house (252) to call the user's home. As another example, if the portable device comprises a PDA or other device running an application that manages files or data, moving the device in a C-shaped gesture (254) may be a command to copy data, an O-shaped gesture (256) may be a command to open a file, a D-shaped gesture (258) may be a command to delete data, and an X-shaped gesture (260) may be a command to exit a file or application. The logical relationship between gestures and their associated functions or operations (eg, the relationship between "O" and opening a file) further promotes ease of use and learning for the user.

  Any number of pre-existing symbols may be used as motion input gestures serving as commands to perform any number of functions, operations, or tasks of the mobile device. In general, many pre-existing symbol gestures exist in two dimensions, and the mobile device may recognize such gestures. In some cases, the mobile device 10 may facilitate recognition of a two-dimensional gesture by disabling detection along a particular dimension, so that when a user attempts to input a two-dimensional gesture, any movement in the third dimension is ignored or discarded. In other cases, the mobile device 10 may recognize three-dimensional gestures based on existing two-dimensional symbols; receiving and detecting three-dimensional gestures enhances the capabilities of the device, for example by increasing the number and types of gestures that can be used as motion input.

  FIG. 18 is a flowchart showing an example in which a pre-existing symbol gesture (the letter "O") is used as motion input. As shown in step 272, the user moves the portable device in the shape of the letter "O". In step 274, the portable device receives raw motion data for the "O" movement from its motion detection elements, and in step 276 the raw motion data is processed to determine the actual motion of the device. In step 278, the mobile device 10 accesses a gesture database 280, which includes a plurality of gestures recognizable by the device, to map the motion to the gesture "O". Each of the plurality of gestures in the gesture database may be defined by a series of accelerations, and the actual motion of the device is matched against the series of accelerations of the gestures in the gesture database. In step 282, the portable device 10 accesses a function database 284 (or gesture mapping database) to map the gesture "O" to a particular function; the function database includes functions that may be performed by one or more applications running on the device. In certain embodiments, the gesture and function databases may reside in the memory 18 of the device. The particular function mapped to the gesture "O" may depend on the particular application currently in use or in focus; for example, "O" may be a command to open a file in one application and a command to call a certain number in another. In some cases, a single gesture may be mapped to the same function for all applications of the device. In step 286, the device behaves according to the mapped function, for example by opening a file.
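
  The flow of FIG. 18 can be pictured roughly as in the sketch below, where a motion is matched to the nearest entry of a gesture database and the matched gesture is then looked up in a function database keyed by the application in focus; the stored sequences, the distance measure, and the application and function names are all assumptions.

```python
# Illustrative sketch only of the flow in FIG. 18: the actual motion is matched
# against a gesture database, and the matched gesture is mapped to a function
# for the application in focus.

GESTURE_DATABASE = {
    # gesture name -> representative acceleration sequence (greatly simplified)
    "O": [(0, 1), (-1, 0), (0, -1), (1, 0)],
    "X": [(1, 1), (-1, -1), (1, -1), (-1, 1)],
}

FUNCTION_DATABASE = {               # gesture mapping database
    ("O", "file_manager"): "open_file",
    ("O", "phone"):        "call_stored_number",
    ("X", "file_manager"): "close_file",
}

def match_gesture(motion):
    """Return the gesture whose stored sequence is closest to the motion."""
    def distance(seq):
        return sum(abs(a - b) + abs(c - d)
                   for (a, c), (b, d) in zip(motion, seq))
    return min(GESTURE_DATABASE, key=lambda name: distance(GESTURE_DATABASE[name]))

def function_for(motion, focused_app):
    gesture = match_gesture(motion)
    return FUNCTION_DATABASE.get((gesture, focused_app), "no_action")

roughly_o = [(0, 0.9), (-1.1, 0), (0, -1), (0.9, 0.1)]
print(function_for(roughly_o, "file_manager"))   # open_file
```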

  As described above, a gesture used as motion input at the mobile device 10 may have different meanings (eg, functions, operations, or tasks) depending on the particular context, such as the particular application in use or in focus, the particular device state with respect to that application, the particular modeled environment, any combination of these, or any other context. For example, a particular gesture may be mapped to a command to scroll a page upward when a web browser is running on the device, while the same gesture may be mapped to a command to check another day when a calendar program is running. The ability to map a particular gesture to different commands depending on the context, such as the application in use, greatly increases the functionality of the device.

  In some embodiments, if gestures are mapped to different commands depending on the context, the portable device can utilize simpler motion detection elements. As an example, the mobile device may include motion detection elements that allow it to recognize and distinguish only twenty different gestures. If each gesture is mapped to a different function in each of four different applications, the ability to recognize only twenty unique gestures can still provide as many as eighty different functions (twenty per application) for the device. If instead each gesture were mapped to its own single function regardless of the application in focus, the overall capabilities of the device would be reduced, and some gestures might go unused in certain applications. The ability to use less complex elements, that is, elements able to recognize and distinguish fewer gestures, as a result of mapping gestures to multiple context-dependent functions may reduce the cost of the elements used in the device, and the task of physically learning the gestures needed to control the device may also be simplified. As described above, gestures may be mapped to different functions, operations, or tasks depending on the application in use, the state of the device, the modeled environment, or other contexts. In some cases, gestures may be mapped to different functions depending on the state of a particular application. For example, in a word processing program a certain gesture may have one function in one state of the program (eg, a menu state) and a different function in another state of the program (eg, a document editing state). In such a case, the command maps relating gestures to functions may include a gesture mapping for each state.

  FIG. 19 is a flowchart 290 illustrating the use of context-based gesture mapping according to a particular embodiment. In the illustrated example, a given gesture has different functions determined by the application in focus. In step 292, the portable device 10 receives raw motion data for a particular gesture movement, and in step 294 such raw data is processed to determine the actual motion of the device. In step 296, the mobile device 10 maps the motion to a gesture, for example by accessing a gesture database. In step 298, the portable device 10 determines which application is in focus; for example, if the device can run four different applications, it determines which of the four is in focus and in use at the time. The device then performs the function mapped to the gesture for the application in focus. Determining that function may be accomplished in one embodiment by accessing a function database, which may also be referred to as a gesture mapping database because it maps gestures of the gesture database to functions. In the illustrated example, if application 1 is in focus, the device performs function 1 in step 300a; if application 2 is in focus, the device performs function 2 in step 300b; if application 3 is in focus, the device performs function 3 in step 300c; and if application 4 is in focus, the device performs function 4 in step 300d.

  As a further example of context-dependent gesture mapping, consider a portable device with phone and PDA capability that runs four applications: a phone application, a calendar application, a file management application, and an e-mail application. A gesture input tracing the letter "S" may have a different function depending on the application in focus. For example, when the phone application is in focus, receiving the gesture input "S" may be a command to call a specific number designated by the "S" gesture; when the calendar application is in focus, it may be a command to scroll to the month of September; when the file management application is in focus, it may be a command to save a file; and when the e-mail application is in focus, it may be a command to send an e-mail. Certain embodiments thus provide great flexibility in mapping gestures to various context-dependent functions.

  As described above, gestures may have different functions depending on the particular context at a given point in time. In certain embodiments, the portable device may be tailored (customized) so that the user can assign device functions to predetermined gestures. Such functions may be context-dependent, so that a gesture has different functions depending on the application in use, the state of the device or the modeled environment. The portable device in certain embodiments may also allow different users of the same device to assign different functions to the same gesture, and such functions may likewise be context-dependent as described above.

  For example, the mobile device 10 may be used by many different users at different times. Each user may assign different functions to the same gesture. When the mobile device receives a gesture input, it must know which user is using the device at that time in order to identify the function that the user wants the device to perform. The device may identify the user in any of a variety of ways. In some embodiments, the user may log in to the device prior to use, using a username and password or otherwise. In other embodiments, the mobile device may be able to identify the user based on the way the user moves the device when making gestures for motion input. Each user may, as described above, assign commands to gestures depending on the context, such as the application of interest on the device. The ability of the mobile device to associate functions with gestures based on a particular user increases the capability and flexibility of the device, which is particularly advantageous when the device can only recognize and distinguish a certain number of gestures.

  FIG. 20 shows a flowchart 310 illustrating the use of user-based gesture mapping according to a particular embodiment. In the illustrated example, a gesture has different functions assigned based on the user who is using the device. In step 312, the mobile device 10 receives raw motion data for a particular gesture motion, and in step 314, such raw motion data is processed to determine the actual motion of the device. In step 316, the mobile device 10 associates the motion with a gesture, for example by accessing a gesture database. In step 318, the portable device 10 determines which user is using the device. Such determination may be made, for example, through a system record of the user having logged into the device before use. The mobile device 10 may also determine the current user through any other suitable method. In step 320, the device performs the function assigned to the gesture input based on the user using the device. In the illustrated example, which describes a process with four possible users, if user 1 is using the device, the device performs function 1 in step 320a; if user 2 is using the device, the device performs function 2 in step 320b; if user 3 is using the device, the device performs function 3 in step 320c; and if user 4 is using the device, the device performs function 4 in step 320d.

  As described above, in some embodiments gestures may be assigned to different functions based on both the user using the device and the context. In this case, the flowchart 310 described above may include an additional step of determining the current context (for example, a step like step 298 of flowchart 290, which identifies the application of interest). The particular function commanded by a given gesture then depends on both the user currently using the device and the context (for example, the particular application of interest at that time).
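  A minimal sketch of how a lookup keyed by both the current user and the application of interest might be organized is given below; the user names, applications and functions are hypothetical placeholders rather than items from the specification:

```python
# Hypothetical mapping: user -> application of interest -> gesture -> function.
USER_GESTURE_MAP = {
    "user1": {"phone": {"S": "call_saved_number"}, "email": {"S": "send_message"}},
    "user2": {"phone": {"S": "silence_ringer"},    "email": {"S": "sync_inbox"}},
}

def lookup_function(user, application, gesture):
    """Resolve a gesture for the identified user and the application in focus."""
    return USER_GESTURE_MAP.get(user, {}).get(application, {}).get(gesture)

print(lookup_function("user1", "phone", "S"))  # call_saved_number
print(lookup_function("user2", "phone", "S"))  # silence_ringer
```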

  As previously described, some embodiments provide a portable device with the ability to receive pre-existing symbols as gestures for motion input. In addition to such embodiments, other embodiments may allow users to create their own gestures that map to functions and/or keys. A user-created gesture may consist of any user-generated symbol or other movement that the user wishes to use as motion input for one or more specific functions, operations or tasks that the device can perform. The user can create a motion with some personal meaning, making it easier for the user to remember the motion command or its intended function.

  FIG. 21 is a flowchart 330 illustrating the assignment process for a user-created gesture according to a particular embodiment. In step 332, an instruction to create a gesture is received from the user. The instruction may be received in any of a variety of ways, using any suitable input format (for example, keys, trackball, or motion). The user then moves the device according to the user-created gesture, and raw motion data for the user-created gesture is received at the portable device in step 334. The raw motion data may consist of a series of accelerations measured from a baseline reference position, after the device has been stabilized, until an instruction to stop recording is received. The instructions to start and stop recording a user-created gesture may be dynamic or stationary instructions (for example, a key press and a key release). The raw motion data is processed in step 336. In step 338, the motion is stored as a gesture, for example in a gesture database. In certain embodiments, the gesture creation instruction may be received after the user moves the device in accordance with the user-created gesture. For example, the user may move the device according to a gesture created by the user that is not currently recognized by the device. The device may then query the user to determine whether the user wishes to store the unrecognized gesture for a particular function. The user may respond affirmatively and thereby make the gesture available for future motion input.
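  The recording step might be approximated as in the sketch below, assuming a start instruction such as a key press and a stop instruction such as a key release; the sampling callbacks are stand-ins for whatever interfaces the device actually provides:

```python
# Sketch: record a user-created gesture as the series of acceleration samples
# captured between a start instruction (e.g., key press) and a stop
# instruction (e.g., key release). The callbacks are hypothetical stand-ins.
def record_gesture(read_accelerometer, stop_requested):
    samples = []
    while not stop_requested():
        samples.append(read_accelerometer())  # one (x, y, z) acceleration sample
    return samples                            # raw motion data for the new gesture

# Tiny demo with fake data sources standing in for real hardware input.
fake_samples = iter([(0.0, 0.1, 9.8), (0.2, 0.0, 9.7), (0.1, -0.1, 9.8)])
stop_signals = iter([False, False, False, True])
print(record_gesture(lambda: next(fake_samples), lambda: next(stop_signals)))
```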

  In step 340, function mapping information for the gesture is received from the user. The function mapping information may identify the device functions, operations or tasks that the user wishes to command with the user-created gesture. In certain embodiments, such function mapping information may specify a series of functions (for example, a macro) commanded by a single gesture. The user may also assign different functions to the gesture depending on the application of interest. In some cases, the user may wish to associate particular gestures with particular keys or keystrokes of the device. One example of associating a series of functions with a gesture is associating a long character string with the gesture (for example, a telephone number that includes a plurality of pauses). In step 342, the function mapping information is stored, for example in a function database or gesture mapping database.
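  One way to picture the stored function mapping information, including a macro-like series of functions such as a dial string with pauses, is the hypothetical sketch below (the gesture name, functions and dial string are all invented):

```python
# Hypothetical gesture-to-function mapping store. A gesture may map to a single
# function or to a macro: an ordered series of functions, such as dialing a
# long string that includes pauses.
function_map = {}

def assign(gesture_name, functions):
    """Associate a user-created gesture with one function or a series of them."""
    function_map[gesture_name] = list(functions)

assign("my_squiggle", ["open_dialer", "dial:5551234", "pause:2s", "dial:88#"])
print(function_map["my_squiggle"])
```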

  As described above, it may be difficult for the user to move the mobile device 10 in exactly the same way each time a gesture is used as an input. Particular embodiments therefore allow varying levels of accuracy for gesture input. The accuracy defines how closely a gesture must be performed in order to constitute a match with a gesture contained in the gesture database accessed by the device. The more closely the user-generated motion must match a gesture in the gesture database, the more difficult it is to successfully execute that gesture motion. As described above, in certain embodiments a motion is matched to a gesture in the gesture database by matching the series of detected accelerations of the motion against those of the gestures in the gesture database.

  As the accuracy required for recognition increases, more gestures of a given complexity can be recognized as distinct. As an example, if the required accuracy is zero, the device can recognize only a single gesture, but will recognize it easily, because whatever motion the user makes is recognized as that gesture. If, on the other hand, the required accuracy were infinite, it would be virtually impossible for the user to form a gesture that the device recognizes, but the device could support a nearly unlimited number of gestures, distinguishing them by only minute differences. One area in which accuracy requirements are particularly applicable is spatial signatures, because for a spatial signature the accuracy level corresponds closely to the security level.

  In certain embodiments, the accuracy that the portable device 10 requires for gesture input may be varied. Different levels of accuracy may be required for different users, for different regions of the “gesture space” (for example, similar gestures may need to be performed with high accuracy to be recognized, while a very distinctive gesture may not require as much accuracy), for particular individual gestures such as signatures, and for the different functions associated with a gesture (for example, more important functions may require higher accuracy for their gesture input to be recognized). Further, in certain embodiments, the user may be able to set the required accuracy level for gestures in one or more gesture spaces, or for all or some gestures. As an example, the user may set the required accuracy for a spatial signature higher than for the user's other gestures, increasing the security of spatial signature input.
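  Such variable accuracy requirements could be represented as a simple per-gesture threshold table, as in the sketch below; the gesture names and threshold values are illustrative assumptions, with the spatial signature given the strictest requirement:

```python
# Hypothetical required-accuracy table, expressed as an allowed matching error:
# a smaller allowed error means a higher required accuracy.
DEFAULT_ALLOWED_ERROR = 0.35
ALLOWED_ERROR = {
    "spatial_signature": 0.05,  # must match very closely, for security
    "letter_S":          0.30,  # ordinary gesture
    "heart_shape":       0.45,  # very distinctive, so a looser match suffices
}

def allowed_error(gesture_name, user_override=None):
    """Return the allowed mismatch for a gesture, honoring a user's override."""
    if user_override is not None:
        return user_override
    return ALLOWED_ERROR.get(gesture_name, DEFAULT_ALLOWED_ERROR)

print(allowed_error("spatial_signature"))  # 0.05
print(allowed_error("letter_S", 0.2))      # a user-tightened requirement
```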

  As described above, in certain embodiments a gesture may be recognized by detecting a series of accelerations of the device as the device is moved along a path by the user according to the intended gesture. Recognition occurs when the series of accelerations detected by the device matches a gesture in the gesture database.

  In some embodiments, each gesture that can be recognized by the mobile device 10, or each gesture in the gesture database, may be defined by a matrix of three-dimensional coordinates. Likewise, the user movement intended as a gesture input may be captured as a matrix of three-dimensional coordinates. The mobile device 10 may determine the intended gesture by comparing the movement matrix with the matrix of each recognizable gesture (each gesture in the gesture database). If the user moves the device so that each point (coordinate) of the movement matrix corresponds to the respective point of the intended gesture's matrix, the user has entered the intended gesture with full accuracy. As the accuracy required for gesture input decreases, the allowable difference between the user's gesture movement and the intended gesture in the gesture database increases for gesture recognition.
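  Assuming that the tracked movement and each stored gesture are resampled to the same number of three-dimensional points, one illustrative way to perform such a comparison is an average point-to-point distance test against an allowed error; this sketch shows one possible matching rule, not necessarily the rule used in any embodiment:

```python
import math

def mean_distance(movement, template):
    """Average Euclidean distance between corresponding 3-D points of two
    equal-length coordinate lists."""
    return sum(math.dist(p, q) for p, q in zip(movement, template)) / len(template)

def matches(movement, template, allowed_error):
    """A looser accuracy requirement corresponds to a larger allowed error."""
    return mean_distance(movement, template) <= allowed_error

square  = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
attempt = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0), (1.0, 1.1, 0.0), (0.0, 0.9, 0.0)]
print(matches(attempt, square, allowed_error=0.2))   # True
print(matches(attempt, square, allowed_error=0.05))  # False
```

Lowering the allowed error in such a scheme corresponds to raising the required accuracy, which in turn frees nearby regions of the gesture space for additional distinct gestures.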

  FIG. 22 shows three gesture inputs made with a portable device that allows varying accuracy levels. In the illustrated example, the intended gesture is an “O”. Gesture motion 350 is an input with 100% accuracy, a perfect “O”, i.e., the intended gesture. Gesture motion 352 does not form a complete “O” and is an input with less than 100% accuracy. Gesture motion 354 is an input with lower accuracy than gesture motion 352. The accuracy requirement for input of the gesture “O” may be set on the portable device, which allows the accuracy level to be changed. The accuracy level may be set such that only gesture motion 350 is recognized as the gesture “O”, such that both gesture motions 350 and 352 are recognized as the gesture “O”, or such that all of gesture motions 350, 352 and 354 are recognized as the gesture “O”. As described above, raising the accuracy requirement increases the space available for additional recognizable gestures. For example, if the accuracy level of the mobile device 10 is set such that only gesture motion 350 is recognized as the gesture “O”, then gesture motions 352 and 354 may be recognized as separate gestures.

  In certain embodiments, the mobile device may change the gesture recognized as commanding a particular function based on the user's personal accuracy. In this way, the mobile device provides dynamic learning of gesture mappings. For example, if a particular gesture in the gesture database is associated with a particular function and the user's repeated attempts to enter that gesture are inaccurate in a consistent way, the mobile device may change the gesture in the gesture database to the user's consistent gesture movement, so that the user's consistent gesture input becomes associated with that particular function.

  As an example, suppose a particular gesture consists of a square motion, but the user's motion when attempting that gesture has consistently (for example, over multiple consecutive attempts) been a triangular motion. The mobile device may recognize this consistent difference between the intended gesture and the user's actual movement and change the intended gesture in the gesture database corresponding to the desired function to the user's actual consistent motion (for example, the triangular motion). After such a change is made, whenever the user enters the triangular gesture, the function previously associated with the square gesture is commanded. The device may determine the intended gesture in any of a variety of ways, through any input format, for example through two-way communication with the user. In certain embodiments, dynamic learning of user input characteristics in this manner may be applied on a user-specific basis. For example, in the above case, another user of the same mobile device may still enter the square gesture to command the same function.
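  One hypothetical way to implement this kind of adaptation is to average the user's recent attempts at the gesture and, if the attempts agree with one another but differ from the stored template, adopt the average as the new template for the same function; the thresholds below are invented for illustration:

```python
import math

def mean_distance(a, b):
    """Average point-to-point distance between two equal-length 3-D paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def maybe_adapt_template(template, recent_attempts, consistency=0.15, drift=0.25):
    """If the user's recent attempts are consistent with each other but differ
    from the stored template, adopt their average as the new template
    (the `consistency` and `drift` thresholds are illustrative)."""
    average = [tuple(sum(c) / len(recent_attempts) for c in zip(*points))
               for points in zip(*recent_attempts)]
    consistent = all(mean_distance(a, average) <= consistency
                     for a in recent_attempts)
    if consistent and mean_distance(average, template) > drift:
        return average   # remap the function to the user's consistent motion
    return template      # keep the original gesture definition

square   = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
triangle = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0, 0, 0)]
print(maybe_adapt_template(square, [triangle, triangle, triangle]) == triangle)  # True
```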

  As described above, the more accurately a user can perform motions for intended gestures, the more gestures are available to be mapped to functions. In some embodiments, the mobile device may recognize that the user's accuracy has increased over time, and the device may accordingly increase the number of gestures available to the user. Increasing the number of gestures available for input may also increase the ability to communicate through gesture input.

  As an example, the user's personal accuracy in entering gestures may initially be such that the user can only input a certain number of gestures recognized by the mobile device. Over time, however, the user's personal accuracy may increase. This increase may be recognized by the portable device, and the device may then allow additional gestures to be used by the user as gesture input. In one example, additional gestures are enabled when the user's accuracy increases beyond a certain threshold or predetermined accuracy level. Because the user's accuracy has increased, the portable device can recognize these additional gestures when the user attempts to input them. As described above, making additional gestures available for input by the user may increase the number of functions the user can command through gesture input, because each gesture may be associated with a different function.
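  Such progressive unlocking might be approximated by gating gesture sets on a measured accuracy score, as in the following sketch; the tiers, gesture names and threshold values are invented for illustration:

```python
# Hypothetical gesture tiers unlocked as the user's measured accuracy improves.
# The score is assumed to be 1 minus the average matching error of recent inputs.
GESTURE_TIERS = [
    (0.0, {"circle", "shake"}),              # always available
    (0.7, {"letter_S", "square"}),           # unlocked at moderate accuracy
    (0.9, {"spatial_signature", "spiral"}),  # unlocked only for precise users
]

def available_gestures(user_accuracy):
    """Return every gesture whose tier threshold the user's accuracy meets."""
    gestures = set()
    for threshold, tier in GESTURE_TIERS:
        if user_accuracy >= threshold:
            gestures |= tier
    return gestures

print(sorted(available_gestures(0.75)))  # ['circle', 'letter_S', 'shake', 'square']
```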

  In certain embodiments, the mobile device may allow the user to set and change the device's noise threshold. The noise threshold is the amount of device movement that must be detected for the movement to be considered intended motion input by the user. For example, if the noise threshold is set low, even very small motion of the device may be treated by the device as motion input. However, if the noise threshold is set high, greater movement of the device is required before the movement is treated as intended input. For example, if the user is riding in a car on a bumpy road, the user may want to set a high noise threshold so that, when the device moves because of bumps in the road, the device does not treat such movement as intended motion input.

  In certain embodiments, the noise threshold may be changed automatically by the device based on the modeled environment. For example, if the device determines that its environment is a moving car, the device may automatically increase the noise threshold so that small movements caused by the car are not interpreted as motion intended by the user.
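  A noise threshold of this kind can be sketched as a magnitude gate whose level depends on the modeled environment; the environment names and threshold values below are illustrative assumptions:

```python
import math

# Illustrative noise thresholds (arbitrary acceleration units) per modeled
# environment; a car or train raises the gate so that vibration is not
# treated as intended motion input.
NOISE_THRESHOLDS = {"still": 0.05, "walking": 0.15, "car": 0.6, "train": 0.6}

def is_intended_motion(sample, environment="still"):
    """Treat an (x, y, z) sample as intended input only if its magnitude
    clears the threshold for the current modeled environment."""
    threshold = NOISE_THRESHOLDS.get(environment, 0.05)
    return math.dist(sample, (0.0, 0.0, 0.0)) >= threshold

print(is_intended_motion((0.1, 0.05, 0.0), "still"))  # True
print(is_intended_motion((0.1, 0.05, 0.0), "car"))    # False: below the car gate
```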

  FIG. 23 is a flowchart 370 illustrating a gesture recognition process that uses many of the features described herein, according to a particular embodiment. In step 372, raw motion data for a particular gesture motion is received. The raw motion data is processed in step 374 to determine the actual motion of the device. Such processing may include various filtering techniques and fusion of data from multiple detection or sensing elements. Associating the actual motion with a gesture may include accessing a user settings database 378 that stores user data (for example, user accuracy, noise characteristics or thresholds, user-created gestures, and other user-specific data or information, including a user identity 381). User-specific information is important because, for example, different users of the mobile device may have different settings and motion input characteristics. For example, an older user may use fewer gestures than a younger user because the older user may enter gestures less accurately, while a more skilled user may have access to more functions through gesture input.

  The user settings database 378 may also include environment model information, which may be a factor in determining the gestures available at a given time. As described above, through environment modeling the device can internally represent its environment and the effects that the environment tends to have on gesture recognition. For example, the device may automatically raise the noise threshold level when the user is on a train. The device may also reduce the required accuracy depending on how crowded the gesture space is in the vicinity of the gesture being considered. Associating the actual motion with a gesture may also include accessing the gesture database 382.

  In step 384, the gesture is associated with a function of the device. This step may include accessing a function mapping database 386 that contains the correspondences between gestures and functions. Different users may have different gesture-to-function mappings and different user-created functions. Thus, the function mapping database 386 may include user-specific mapping instructions or characteristics, user-created functions (for example, macros and/or phone numbers) and any other information applicable to mapping a particular gesture to one or more functions. In some embodiments, gestures may also be associated with individual keystrokes. The user's identity (identifier) 381 may also be accessed in this step. In addition, device context information 388 may be accessed and used in mapping the gesture. The context information may include environment model information 389, information 390 about the application of interest, and device status information 391 (for example, time and date information, location information, battery condition and mode information such as silent mode). In step 392, the device performs the one or more functions so associated, for example function 1 in step 392a, function 2 in step 392b, or function 3 in step 392c.
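  Pulling these steps together, a highly simplified structural sketch of the flow of flowchart 370 is given below, with every database reduced to a plain dictionary and every name a placeholder; it is meant only to show how user settings, context and the mapping databases interact:

```python
import math

# Placeholder "databases" for the sketch.
GESTURE_DB = {"circle": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]}
USER_SETTINGS = {"alice": {"allowed_error": 0.3}}
FUNCTION_MAP = {("alice", "camera", "circle"): "zoom_in",
                ("alice", "phone", "circle"): "redial"}

def recognize(movement, allowed_error):
    """Match the processed movement against the gesture database."""
    for name, template in GESTURE_DB.items():
        error = sum(math.dist(p, q) for p, q in zip(movement, template)) / len(template)
        if error <= allowed_error:
            return name
    return None

def handle_motion(movement, user, focused_app):
    """Map a processed movement to a function for this user and context."""
    settings = USER_SETTINGS[user]                 # user-specific accuracy, etc.
    gesture = recognize(movement, settings["allowed_error"])
    if gesture is None:
        return None
    return FUNCTION_MAP.get((user, focused_app, gesture))  # gesture -> function

print(handle_motion([(0.1, 0, 0), (1, 0.1, 0), (1, 1, 0), (0, 0.9, 0)], "alice", "camera"))
# zoom_in
```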

As described above, in certain embodiments the portable device 10 may comprise a cellular telephone with many of the functions described herein. For example, a cellular telephone with motion input capability may use motion input to flatten menus as described above. The cellular telephone may also detect device states and environments, such as free-fall or the phone lying face down or face up, and may map such states to operations such as mute, speakerphone and power off. Detection of other device states may likewise be used, for example to take the phone out of mute or speakerphone status. The cellular telephone may use gestures to control dialing (for example, gesture-based speed dialing) or to lock and unlock the device keypad. For example, the device may be moved in a heart shape to call home, in a clockwise circle to call the office, or in a counterclockwise circle to call another important person. The user may program the cellular telephone to customize such gesture mappings.

  In certain embodiments, the portable device 10 may comprise a digital camera that uses motion input for at least some of the functions described herein. For example, a digital camera with motion input capability may use motion input to flatten menus as described above. Motion may be used to zoom in on (and out of) still pictures or video so that they can be examined more smoothly and intuitively. Motion may also be used to zoom in and out of a large set of thumbnails of photos or video clips, making it easier to select one or more for consideration. A virtual desktop may be used to review many digital photos or video clips, or many thumbnails of digital photos and video clips, by translating the camera or using gesture input. Gestures and simple motions may be used, alone or in combination with other interface mechanisms, to change various settings of digital still and video cameras, such as flash settings, focus type and light-sensing mode. Furthermore, free-fall may be detected so that the camera can protect itself from damage from an impending impact. Such protection may include shutting down all or part of the camera, closing the lens cover and retracting the lens.

  In particular embodiments, the portable device 10 may comprise a digital watch that uses motion input for at least some of the functions described herein. For example, a digital watch with motion input capability may use motion input to flatten menus as described above. In some embodiments, tapping the watch or making certain gestures may be used to put the watch into silent mode. Other functions may be accessed through taps, rotations, translations and other, more complex gestures. These functions may include changing time zones, setting the watch (for example, setting the time and making other adjustable settings), changing modes (for example, time, alarm or stopwatch modes), activating the backlight, using the stopwatch (for example, starting, stopping and splitting the stopwatch) and starting and stopping other timers.

  In certain embodiments, motion detection may be separate from the display. For example, the display may be provided on eyeglasses or contact lenses, other parts of the mobile device may be distributed across the user's body, and the display may therefore not be part of the same physical element as the motion input device or element.

  As noted above, certain of the drawings illustrate various methods, flowcharts and processes that may be performed in particular embodiments. It should be understood that in various embodiments the steps may be performed in any order, and that steps of a particular method, flowchart or process may be combined with steps of other methods, flowcharts or processes, or with other steps of the same method, flowchart or process, without departing from the scope of the invention.

  Although the invention has been described in detail with reference to specific embodiments, it should be understood that various other changes, substitutions and alterations may be made to it without departing from the scope and spirit of the invention. For example, although the present invention has been described with reference to a number of elements included in the portable device 10, these elements may be combined, rearranged or positioned to accommodate particular architectures or needs. Furthermore, any of these elements may be provided as separate external elements where appropriate. The present invention contemplates great flexibility in the arrangement of these elements as well as their internal components.

  Numerous other changes, substitutions, variations, alterations and modifications will be apparent to those skilled in the art, and it is intended that the present invention encompass all such changes, substitutions, variations, alterations and modifications as falling within the scope and spirit of the appended claims.

A diagram illustrating a portable device having a motion interface function according to a particular embodiment.
A diagram illustrating a motion detector of the portable device of FIG. 1 according to a particular embodiment.
A diagram illustrating the use of the motion detection components of the portable device of FIG. 1 according to a particular embodiment.
A diagram illustrating an example portable device with motion detection capability according to a particular embodiment.
A diagram illustrating selection and amplification of a dominant motion of a portable device according to a particular embodiment.
A flowchart illustrating preferred motion selection according to a particular embodiment.
A flowchart illustrating the setting of a reference point of a portable device according to a particular embodiment.
A diagram illustrating scrubbing functionality of a portable device for virtual desktop navigation according to a particular embodiment.
A flowchart illustrating the scrubbing process of FIG. 8 according to a particular embodiment.
A diagram illustrating an example of menu navigation using gesture input according to a particular embodiment.
A diagram illustrating example gestures used to perform various functions on a portable device according to particular embodiments.
A diagram illustrating an example of map navigation using motion input according to a particular embodiment.
A diagram illustrating one form of motion input cursor navigation according to particular embodiments.
A diagram illustrating another form of motion input cursor navigation according to particular embodiments.
A flowchart illustrating a process for utilizing feedback in response to motion input according to a particular embodiment.
A diagram illustrating an example system using spatial signatures with a portable device according to a particular embodiment.
A diagram illustrating an example system in which motion input of a portable device controls multiple other devices according to a particular embodiment.
A flowchart illustrating an environment modeling process for a portable device according to a particular embodiment.
A diagram illustrating example gestures mapped to various functions of a portable device according to a particular embodiment.
A flowchart illustrating the utilization of pre-existing symbol gestures according to a particular embodiment.
A flowchart illustrating the utilization of context-based gesture mapping according to particular embodiments.
A flowchart illustrating the utilization of user-based gesture mapping according to a particular embodiment.
A flowchart illustrating the assignment process for a user-created gesture according to a particular embodiment.
A diagram illustrating three gesture inputs using a portable device with varying accuracy levels according to a particular embodiment.
A flowchart illustrating a gesture recognition process utilizing multiple features according to a particular embodiment.

Claims (27)

  1. A motion control portable device,
    A first accelerometer that detects acceleration along a first axis;
    A second accelerometer that detects acceleration along a second axis perpendicular to the first axis;
    A tilt detection element for detecting a rotational component around at least one of the first axis and the second axis;
    A display that displays the current image,
    A motion tracking module that tracks the movement of the device in three dimensions using the first accelerometer, the second accelerometer and the tilt sensing element;
    A controller that generates the current image and changes the current image in response to movement of the device;
    A mobile device for motion control, comprising:
  2. The display has a display surface;
    The motion control portable device according to claim 1, wherein the first and second axes are substantially parallel to the display surface.
  3. The tilt detection element includes a third accelerometer that detects acceleration along a third axis that is perpendicular to the first axis and perpendicular to the second axis;
    The motion control portable device according to claim 1, wherein, based on the acceleration measured by the third accelerometer, the motion tracking module distinguishes between translation in a plane formed by the first axis and the second axis and rotation having a component around at least one of the first axis and the second axis.
  4. The tilt detection element is
    A third accelerometer that detects acceleration along a third axis that is perpendicular to the first axis and also perpendicular to the second axis;
    A camera that generates a video stream;
    A video analysis module for detecting a direction of motion based on the video stream;
    The motion control portable device according to claim 1, further comprising:
  5. The tilt detection element has a distance measuring device for determining distance information including a distance between an object in the video stream and the device;
    The motion control portable device according to claim 4, wherein the video analysis module uses the distance to determine the magnitude of the translational motion of the device.
  6. The tilt detection element is
    A first camera that produces a first video stream and is focused in a first direction along a third axis that is perpendicular to the first axis and also perpendicular to the second axis;
    A second camera that generates a second video stream and is focused in a second direction opposite to the first direction along the third axis;
    A video analysis module for detecting a direction of motion of the device based on the first video stream and the second video stream;
    The motion control portable device according to claim 1, further comprising:
  7. The motion control portable device according to claim 6, wherein the tilt detection element further includes a third accelerometer that detects acceleration along the third axis.
  8. The video analysis module is
    Detecting a first edge of an object in the first video stream;
    Detecting a second edge of an object in the second video stream;
    Determining the movement of the first edge and the second edge;
    Determining a difference between the movement of the first edge and the movement of the second edge;
    The motion control portable device according to claim 6, wherein the tilt component and the translation component are determined based on the difference.
  9. A gesture database including a plurality of gestures defined by movement of the device with respect to the first position of the device;
    A gesture mapping database that associates each gesture with a corresponding command;
    The motion control portable device according to claim 1, wherein the controller identifies a received gesture by comparing the tracked movement of the device with the gestures, identifies the command associated with the received gesture, and modifies the current image by executing the identified command.
  10. The motion tracking module is further operative to verify translational motion of the device in a plane formed by the first axis and the second axis based on the motion of the device;
    The current image is a subsection of a larger image;
    The motion control portable device according to claim 1, wherein the controller continuously modifies the current image to display another subsection of the larger image based on the position determined from the translational motion.
  11. The motion tracking module operates to ignore accelerations detected by the first accelerometer and accelerations detected by the second accelerometer that appear below a certain noise threshold. The motion control portable device according to claim 1.
  12. A method for controlling a portable device comprising:
    Detecting an acceleration along a first axis using a first accelerometer;
    Detecting an acceleration along a second axis perpendicular to the first axis using a second accelerometer;
    Detecting a rotation component around at least one of the first axis and the second axis using a tilt detection element;
    Tracking the movement of the device in three dimensions using the first accelerometer, the second accelerometer and the tilt sensing element;
    Generating the current image using a display of the device and changing the current image in response to the tracked movement of the device;
    A method characterized by comprising:
  13. The display has a display surface;
    The method of claim 12, wherein the first and second axes are substantially parallel to the display surface.
  14. Detecting an acceleration along a third axis perpendicular to the first axis and perpendicular to the second axis using a third accelerometer of the tilt detection element;
    Distinguishing, based on the acceleration measured by the third accelerometer, between translation in a plane formed by the first axis and the second axis and rotation having a component around at least one of the first axis and the second axis;
    13. The method of claim 12, comprising:
  15. Detecting an acceleration along a third axis perpendicular to the first axis and perpendicular to the second axis using a third accelerometer of the tilt detection element;
    Monitoring the video stream generated by the camera of the device;
    Detecting a direction of motion based on the video stream;
    13. The method of claim 12, comprising:
  16. Determining distance information including a distance between an object in the video stream and the device;
    Determining the magnitude of translational movement of the device using the distance;
    The method according to claim 15.
  17. A first video stream generated by a first camera of the device is monitored, the first camera being focused in a first direction along a third axis that is perpendicular to the first axis and also perpendicular to the second axis,
    A second video stream generated by a second camera of the device is monitored, the second camera being focused in a second direction opposite to the first direction along the third axis,
    Detecting a direction of motion of the device based on the first video stream and the second video stream;
    13. The method of claim 12, wherein:
  18. Detecting a first edge of an object in the first video stream;
    Detecting a second edge of an object in the second video stream;
    Determining the movement of the first edge and the second edge;
    Determining a difference between the movement of the first edge and the movement of the second edge;
    The method according to claim 17, wherein the tilt component and the translation component are determined based on the difference.
  19. Comparing the gesture database containing a plurality of gestures defined by the movement of the device relative to the first position of the device with the tracked movement of the device;
    Identifying the command associated with the received gesture,
    The method of claim 12, wherein the current image is modified by executing the identified command.
  20. A logic device, embodied in a computer-readable medium, for controlling a portable device,
    Detecting an acceleration along a first axis using a first accelerometer;
    Detecting an acceleration along a second axis perpendicular to the first axis using a second accelerometer;
    Detecting a rotation component around at least one of the first axis and the second axis using a tilt detection element;
    Tracking the movement of the device in three dimensions using the first accelerometer, the second accelerometer and the tilt sensing element;
    Generating the current image using a display of the device and changing the current image in response to the tracked movement of the device;
    A logic device that causes a computer to execute.
  21. Detecting an acceleration along a third axis perpendicular to the first axis and perpendicular to the second axis using a third accelerometer of the tilt detection element;
    Distinguishing, based on the acceleration measured by the third accelerometer, between translation in a plane formed by the first axis and the second axis and rotation having a component around at least one of the first axis and the second axis;
    21. The logic device according to claim 20, wherein the logic device is executed by a computer.
  22. Detecting an acceleration along a third axis perpendicular to the first axis and perpendicular to the second axis using a third accelerometer of the tilt detection element;
    Monitoring the video stream generated by the camera of the device;
    Detecting a direction of motion based on the video stream;
    21. The logic device according to claim 20, wherein the logic device is executed by a computer.
  23. Determining distance information including a distance between an object in the video stream and the device;
    Determining the magnitude of translational movement of the device using the distance;
    21. The logic device according to claim 20, wherein the logic device is executed by a computer.
  24. A first video stream generated by a first camera of the device is monitored, the first camera being focused in a first direction along a third axis that is perpendicular to the first axis and also perpendicular to the second axis,
    A second video stream generated by a second camera of the device is monitored, the second camera being focused in a second direction opposite to the first direction along the third axis,
    Detecting a direction of motion of the device based on the first video stream and the second video stream;
    21. The logic device of claim 20, causing a computer to execute.
  25. Detecting a first edge of an object in the first video stream;
    Detecting a second edge of an object in the second video stream;
    Determining the movement of the first edge and the second edge;
    Determining a difference between the movement of the first edge and the movement of the second edge;
    The logic device of claim 24, causing a computer to determine a tilt component and a translation component based on the difference.
  26. Comparing the gesture database containing a plurality of gestures defined by the movement of the device relative to the first position of the device with the tracked movement of the device;
    Identifying the command associated with the received gesture,
    The logic device according to claim 20, further comprising causing a computer to execute the identified command to modify the current image.
  27. A motion control portable device,
    Means for detecting an acceleration along a first axis using a first accelerometer;
    Means for detecting an acceleration along a second axis perpendicular to the first axis;
    Means for detecting a rotational component about at least one of the first axis and the second axis;
    Means for tracking the movement of the device in three dimensions based on the acceleration along the first axis, the acceleration along the second axis, and the rotational component;
    Means for generating the current image using a display of the device and changing the current image in response to the tracked movement of the device;
    A mobile device for motion control, comprising:
JP2007504983A 2004-03-23 2005-03-07 Identification of mobile device tilt and translational components Pending JP2007531113A (en)

Priority Applications (19)

Application Number Priority Date Filing Date Title
US10/807,568 US7180501B2 (en) 2004-03-23 2004-03-23 Gesture based navigation of a handheld user interface
US10/807,563 US7301526B2 (en) 2004-03-23 2004-03-23 Dynamic adaptation of gestures for motion controlled handheld devices
US10/807,571 US7176887B2 (en) 2004-03-23 2004-03-23 Environmental modeling for motion controlled handheld devices
US10/807,572 US20050212760A1 (en) 2004-03-23 2004-03-23 Gesture based user interface supporting preexisting symbols
US10/807,564 US7180500B2 (en) 2004-03-23 2004-03-23 User definable gestures for motion controlled handheld devices
US10/807,569 US7301528B2 (en) 2004-03-23 2004-03-23 Distinguishing tilt and translation motion components in handheld devices
US10/807,559 US7176886B2 (en) 2004-03-23 2004-03-23 Spatial signatures
US10/807,558 US7280096B2 (en) 2004-03-23 2004-03-23 Motion sensor engagement for a handheld device
US10/807,567 US7365737B2 (en) 2004-03-23 2004-03-23 Non-uniform gesture precision
US10/807,588 US7176888B2 (en) 2004-03-23 2004-03-23 Selective engagement of motion detection
US10/807,557 US7365735B2 (en) 2004-03-23 2004-03-23 Translation controlled cursor
US10/807,561 US7903084B2 (en) 2004-03-23 2004-03-23 Selective engagement of motion input modes
US10/807,566 US7173604B2 (en) 2004-03-23 2004-03-23 Gesture identification of controlled devices
US10/807,570 US7180502B2 (en) 2004-03-23 2004-03-23 Handheld device with preferred motion selection
US10/807,560 US7365736B2 (en) 2004-03-23 2004-03-23 Customizable gesture mappings for motion controlled handheld devices
US10/807,565 US7301527B2 (en) 2004-03-23 2004-03-23 Feedback based user interface for motion controlled handheld devices
US10/807,562 US20050212753A1 (en) 2004-03-23 2004-03-23 Motion controlled remote controller
US10/807,589 US7301529B2 (en) 2004-03-23 2004-03-23 Context dependent gesture response
PCT/US2005/007409 WO2005103863A2 (en) 2004-03-23 2005-03-07 Distinguishing tilt and translation motion components in handheld devices

Publications (1)

Publication Number Publication Date
JP2007531113A true JP2007531113A (en) 2007-11-01

Family

ID=35005698

Family Applications (2)

Application Number Title Priority Date Filing Date
JP2007504983A Pending JP2007531113A (en) 2004-03-23 2005-03-07 Identification of mobile device tilt and translational components
JP2008192455A Active JP4812812B2 (en) 2004-03-23 2008-07-25 Identification of mobile device tilt and translational components

Family Applications After (1)

Application Number Title Priority Date Filing Date
JP2008192455A Active JP4812812B2 (en) 2004-03-23 2008-07-25 Identification of mobile device tilt and translational components

Country Status (4)

Country Link
EP (1) EP1728142B1 (en)
JP (2) JP2007531113A (en)
KR (1) KR100853605B1 (en)
WO (1) WO2005103863A2 (en)

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007193656A (en) * 2006-01-20 2007-08-02 Kddi Corp Personal identification device
JP2010011431A (en) * 2008-05-27 2010-01-14 Toshiba Corp Wireless communication apparatus
WO2010010929A1 (en) * 2008-07-23 2010-01-28 株式会社セガ Game device, method for controlling game, game control program and computer readable recording medium storing program
JP2010067142A (en) 2008-09-12 2010-03-25 British Virgin Islands CyWee Group Ltd Inertia sensing device
JP2010118060A (en) * 2008-11-14 2010-05-27 Samsung Electronics Co Ltd Method for operating ui based on motion sensor and mobile terminal using the same
JP2011018161A (en) * 2009-07-08 2011-01-27 Nec Corp Portable terminal, and application operation method in portable terminal
JP2011526192A (en) * 2008-06-27 2011-10-06 マイクロソフト コーポレーション Dynamic selection of tilt function sensitivity
JP2012502344A (en) * 2008-09-04 2012-01-26 エクストリーム リアリティー エルティーディー.Extreme Reality Ltd. Method system and software for providing an image sensor based human machine interface
JP2012506100A (en) * 2008-10-15 2012-03-08 インベンセンス,インク.Invensense,Inc. Mobile device with gesture recognition
JP2012507802A (en) * 2008-10-29 2012-03-29 インベンセンス,インク.Invensense,Inc. Control and access content using motion processing on mobile devices
JP2012510109A (en) * 2008-11-24 2012-04-26 クアルコム,インコーポレイテッド Illustrated method for selecting and activating an application
JP2012165073A (en) * 2011-02-03 2012-08-30 Sony Corp Controller, control method, and program
JP2012530958A (en) * 2009-06-19 2012-12-06 アルカテル−ルーセント Gesture on a touch sensitive input device to close a window or application
JP2012244264A (en) * 2011-05-17 2012-12-10 Funai Electric Co Ltd Image forming device
JP2013500523A (en) * 2009-07-23 2013-01-07 クゥアルコム・インコーポレイテッドQualcomm Incorporated Method and apparatus for controlling mobile devices and consumer electronic devices
JP2013101649A (en) * 2012-12-26 2013-05-23 Japan Research Institute Ltd Terminal device and computer program
JP2013157959A (en) * 2012-01-31 2013-08-15 Toshiba Corp Portable terminal apparatus, voice recognition processing method for the same, and program
JP2013536660A (en) * 2010-08-31 2013-09-19 ヴォルフガング・ブレンデル Wireless remote control by position sensor system
WO2013157630A1 (en) * 2012-04-20 2013-10-24 株式会社ニコン Electronic apparatus and motion detection method
US8587417B2 (en) 2008-07-15 2013-11-19 Immersion Corporation Systems and methods for mapping message contents to virtual physical properties for vibrotactile messaging
JP2014508366A (en) * 2011-03-14 2014-04-03 ムラタ エレクトロニクス オサケユキチュア Pointing method, device and system therefor
JP2014168522A (en) * 2013-03-01 2014-09-18 Toshiba Corp X-ray diagnostic apparatus
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US8878896B2 (en) 2005-10-31 2014-11-04 Extreme Reality Ltd. Apparatus method and system for imaging
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
JP2015005197A (en) * 2013-06-21 2015-01-08 カシオ計算機株式会社 Information processing apparatus, information processing method, and program
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
US9347968B2 (en) 2011-07-04 2016-05-24 Nikon Corporation Electronic device and input method
WO2017094346A1 (en) * 2015-12-01 2017-06-08 ソニー株式会社 Information processing device, information processing method, and program
JP2017174446A (en) * 2009-03-12 2017-09-28 イマージョン コーポレーションImmersion Corporation Systems and methods for using textures in graphical user interface widgets
JP2018523200A (en) * 2015-06-26 2018-08-16 インテル コーポレイション Technology for input gesture control of wearable computing devices based on fine motion

Families Citing this family (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050222801A1 (en) 2004-04-06 2005-10-06 Thomas Wulff System and method for monitoring a mobile computing product/arrangement
AT417460T (en) 2005-05-12 2008-12-15 Tcl & Alcatel Mobile Phones Method for synchronizing at least one multimediaaperipherat device a portable communication device and compressed communication device
US7822513B2 (en) 2005-07-27 2010-10-26 Symbol Technologies, Inc. System and method for monitoring a mobile computing product/arrangement
TWI316195B (en) 2005-12-01 2009-10-21 Ind Tech Res Inst Input means for interactive devices
DK1806643T3 (en) 2006-01-06 2014-12-08 Drnc Holdings Inc Method of introducing commands and / or characters to a portable communication device equipped with an inclination sensor
CN100429610C (en) * 2006-01-19 2008-10-29 宏达国际电子股份有限公司 Intuition type screen controller
KR100667853B1 (en) 2006-01-25 2007-01-11 삼성전자주식회사 Apparatus and method for scrolling screen in portable device and recording medium storing program for performing the method thereof
US7667686B2 (en) * 2006-02-01 2010-02-23 Memsic, Inc. Air-writing and motion sensing input for portable devices
JP4747874B2 (en) * 2006-02-14 2011-08-17 パナソニック電工株式会社 Remote control device and remote control system
US7796052B2 (en) 2006-03-29 2010-09-14 Honeywell International Inc. One button multifunction key fob for controlling a security system
US20100045705A1 (en) * 2006-03-30 2010-02-25 Roel Vertegaal Interaction techniques for flexible displays
JP2007286812A (en) * 2006-04-14 2007-11-01 Sony Corp Portable electronic equipment, user interface control method, and program
US8594742B2 (en) * 2006-06-21 2013-11-26 Symbol Technologies, Inc. System and method for monitoring a mobile device
US20080030456A1 (en) * 2006-07-19 2008-02-07 Sony Ericsson Mobile Communications Ab Apparatus and Methods for Providing Motion Responsive Output Modifications in an Electronic Device
US20080030464A1 (en) * 2006-08-03 2008-02-07 Mark Sohm Motion-based user interface for handheld
US8106856B2 (en) 2006-09-06 2012-01-31 Apple Inc. Portable electronic device for photo management
AU2015201028B2 (en) * 2006-09-06 2017-03-30 Apple Inc. Electronic device for digital object management
JP5023073B2 (en) * 2006-12-06 2012-09-12 アルプス電気株式会社 Motion sensing program and electronic compass provided with the same
US7884805B2 (en) * 2007-04-17 2011-02-08 Sony Ericsson Mobile Communications Ab Using touches to transfer information between devices
US8250921B2 (en) 2007-07-06 2012-08-28 Invensense, Inc. Integrated motion processing unit (MPU) with MEMS inertial sensing and embedded digital electronics
TWI333156B (en) * 2007-08-16 2010-11-11 Ind Tech Res Inst Inertia sensing input controller and receiver and interactive system using thereof
US8432365B2 (en) 2007-08-30 2013-04-30 Lg Electronics Inc. Apparatus and method for providing feedback for three-dimensional touchscreen
US8219936B2 (en) 2007-08-30 2012-07-10 Lg Electronics Inc. User interface for a mobile device using a user's gesture in the proximity of an electronic device
US8942764B2 (en) 2007-10-01 2015-01-27 Apple Inc. Personal media device controlled via user initiated movements utilizing movement based interfaces
US7934423B2 (en) 2007-12-10 2011-05-03 Invensense, Inc. Vertically integrated 3-axis MEMS angular accelerometer with integrated electronics
DE102007060007A1 (en) 2007-12-13 2009-06-18 BSH Bosch und Siemens Hausgeräte GmbH Control device for a domestic appliance, a domestic appliance with an operating device, and a method for operating a domestic appliance
KR20090065040A (en) 2007-12-17 2009-06-22 삼성전자주식회사 Dual pointing device and method based on 3-d motion and touch sensors
US8952832B2 (en) 2008-01-18 2015-02-10 Invensense, Inc. Interfacing application programs and motion sensors of a device
US9513704B2 (en) 2008-03-12 2016-12-06 Immersion Corporation Haptically enabled user interface
KR101482115B1 (en) 2008-07-07 2015-01-13 엘지전자 주식회사 Controlling a Mobile Terminal with a Gyro-Sensor
KR101524616B1 (en) * 2008-07-07 2015-06-02 엘지전자 주식회사 Controlling a Mobile Terminal with a Gyro-Sensor
JP5793426B2 (en) * 2009-01-29 2015-10-14 イマージョン コーポレーションImmersion Corporation System and method for interpreting physical interaction with a graphical user interface
KR101482121B1 (en) * 2008-08-04 2015-01-13 엘지전자 주식회사 Controlling a Mobile Terminal Capable of Web Browsing
KR101061363B1 (en) 2008-08-26 2011-09-01 팅크웨어(주) 3D control system specialized in navigation system and its method
JP6151368B2 (en) * 2012-10-25 2017-06-21 ナイキ イノベイト シーブイ System and method for monitoring athletic performance in a team sports environment
CN104503578B (en) * 2009-07-22 2018-02-06 意美森公司 Interactive touch-screen game symbol with the touch feedback across platform
US20110054833A1 (en) * 2009-09-02 2011-03-03 Apple Inc. Processing motion sensor data using accessible templates
US8780069B2 (en) 2009-09-25 2014-07-15 Apple Inc. Device, method, and graphical user interface for manipulating user interface objects
US9174123B2 (en) 2009-11-09 2015-11-03 Invensense, Inc. Handheld computer systems and techniques for character and command recognition related to human movements
KR101646671B1 (en) * 2009-12-10 2016-08-08 삼성전자주식회사 Portable electronic apparatus and control method thereof
FR2954533A1 (en) * 2009-12-21 2011-06-24 Air Liquide Portable terminal i.e. hand-held computer, for transmitting information or orders in industrial area, has gyroscope measuring orientation parameters of case, and accelerometer measuring acceleration parameters of case during movements
US8698762B2 (en) 2010-01-06 2014-04-15 Apple Inc. Device, method, and graphical user interface for navigating and displaying content in context
JP5413250B2 (en) * 2010-03-05 2014-02-12 ソニー株式会社 Image processing apparatus, image processing method, and program
DE102010020925B4 (en) 2010-05-10 2014-02-27 Faro Technologies, Inc. Method for optically scanning and measuring an environment
KR101219292B1 (en) 2010-06-16 2013-01-18 (주)마이크로인피니티 Hand-held device including a display and method for navigating objects on the display
CN102298162B (en) * 2010-06-28 2014-03-05 深圳富泰宏精密工业有限公司 Backlight regulating system and method
US8767019B2 (en) 2010-08-31 2014-07-01 Sovanta Ag Computer-implemented method for specifying a processing operation
US8972467B2 (en) 2010-08-31 2015-03-03 Sovanta Ag Method for selecting a data set from a plurality of data sets by means of an input device
CN102647504B (en) 2011-02-16 2013-07-17 三星电子(中国)研发中心 Method for controlling applications in mobile phone
JP5762885B2 (en) * 2011-08-29 2015-08-12 京セラ株式会社 apparatus, method, and program
US9002739B2 (en) * 2011-12-07 2015-04-07 Visa International Service Association Method and system for signature capture
JP2013154767A (en) * 2012-01-30 2013-08-15 Mitsubishi Electric Corp Onboard meter editing apparatus
US9411423B2 (en) 2012-02-08 2016-08-09 Immersion Corporation Method and apparatus for haptic flex gesturing
KR101772384B1 (en) * 2012-03-25 2017-08-29 인텔 코포레이션 Orientation sensing computing devices
KR101966695B1 (en) * 2012-06-22 2019-04-08 삼성전자 주식회사 Method and apparatus for processing a memo during voice communication in a terminal equipment having a touch input device
CN103576847B (en) * 2012-08-09 2016-03-30 腾讯科技(深圳)有限公司 Obtain the method and apparatus of account information
US8493354B1 (en) 2012-08-23 2013-07-23 Immersion Corporation Interactivity model for shared feedback on mobile devices
DE102012109481A1 (en) 2012-10-05 2014-04-10 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US10067231B2 (en) 2012-10-05 2018-09-04 Faro Technologies, Inc. Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US8994827B2 (en) 2012-11-20 2015-03-31 Samsung Electronics Co., Ltd Wearable electronic device
US10423214B2 (en) 2012-11-20 2019-09-24 Samsung Electronics Company, Ltd Delegating processing from wearable electronic device
US10185416B2 (en) * 2012-11-20 2019-01-22 Samsung Electronics Co., Ltd. User gesture input to wearable electronic device involving movement of device
US10551928B2 (en) 2012-11-20 2020-02-04 Samsung Electronics Company, Ltd. GUI transitions on wearable electronic device
JP6042753B2 (en) * 2013-03-18 2016-12-14 株式会社Nttドコモ Terminal device and operation lock releasing method
EP3007042A4 (en) 2013-06-07 2017-06-28 Seiko Epson Corporation Electronic device and tap operation detection method
JP2014238696A (en) * 2013-06-07 2014-12-18 セイコーエプソン株式会社 Electronic apparatus and tap operation detection method
KR20150026056A (en) 2013-08-30 2015-03-11 삼성전자주식회사 An electronic device with curved bottom and operating method thereof
DE112014004636T5 (en) * 2013-10-08 2016-07-07 Tk Holdings Inc. Force-based touch interface with integrated multisensory feedback
US20160299570A1 (en) * 2013-10-24 2016-10-13 Apple Inc. Wristband device input using wrist movement
US10691332B2 (en) 2014-02-28 2020-06-23 Samsung Electronics Company, Ltd. Text input on an interactive display
KR20160001228A (en) * 2014-06-26 2016-01-06 엘지전자 주식회사 Mobile terminal and method for controlling the same
US9811164B2 (en) 2014-08-07 2017-11-07 Google Inc. Radar-based gesture sensing and data transmission
CN105447350B (en) 2014-08-07 2019-10-01 阿里巴巴集团控股有限公司 A kind of identity identifying method and device
US9671221B2 (en) 2014-09-10 2017-06-06 Faro Technologies, Inc. Portable device for optically measuring three-dimensional coordinates
DE102014013677B4 (en) 2014-09-10 2017-06-22 Faro Technologies, Inc. Method for optically scanning and measuring an environment with a handheld scanner and subdivided display
DE102014013678B3 (en) 2014-09-10 2015-12-03 Faro Technologies, Inc. Method for optically sensing and measuring an environment with a handheld scanner and gesture control
JP2017528714A (en) * 2014-09-10 2017-09-28 ファロ テクノロジーズ インコーポレーテッド Method for optical measurement of three-dimensional coordinates and control of a three-dimensional measuring device
US9693040B2 (en) 2014-09-10 2017-06-27 Faro Technologies, Inc. Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device
US9602811B2 (en) 2014-09-10 2017-03-21 Faro Technologies, Inc. Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
US9600080B2 (en) 2014-10-02 2017-03-21 Google Inc. Non-line-of-sight radar-based gesture recognition
US9746930B2 (en) 2015-03-26 2017-08-29 General Electric Company Detection and usability of personal electronic devices for field engineers
US9652125B2 (en) 2015-06-18 2017-05-16 Apple Inc. Device, method, and graphical user interface for navigating media content
US9928029B2 (en) 2015-09-08 2018-03-27 Apple Inc. Device, method, and graphical user interface for providing audiovisual feedback
US9990113B2 (en) 2015-09-08 2018-06-05 Apple Inc. Devices, methods, and graphical user interfaces for moving a current focus using a touch-sensitive remote control
EP3392740A4 (en) * 2015-12-18 2018-12-19 Sony Corporation Information processing device, information processing method, and program
DE102015122844A1 (en) 2015-12-27 2017-06-29 Faro Technologies, Inc. 3D measuring device with battery pack
US10324973B2 (en) 2016-06-12 2019-06-18 Apple Inc. Knowledge graph metadata network based on notable moments
DK201670609A1 (en) 2016-06-12 2018-01-02 Apple Inc User interfaces for retrieving contextually relevant media content
DE102016114376A1 (en) * 2016-08-03 2018-02-08 Denso Corporation Feedback-supported remote control for vehicle doors

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0728591A (en) * 1993-05-13 1995-01-31 Toshiba Corp Space manipulation mouse system and space operation pattern input method
JPH0798630A (en) * 1993-09-28 1995-04-11 Towa Electron Kk Three-dimensional position input device
JP3792832B2 (en) * 1997-05-07 2006-07-05 Fuji Heavy Industries Ltd. Stereo camera adjustment device
AU759440B2 (en) * 1998-01-26 2003-04-17 Apple Inc. Method and apparatus for integrating manual input
JP3022558B1 (en) * 1998-05-21 2000-03-21 Nippon Telegraph and Telephone Corporation Three-dimensional display method and device
JP2000097637A (en) * 1998-09-24 2000-04-07 Olympus Optical Co Ltd Attitude position detecting device
JP2000132305A (en) * 1998-10-23 2000-05-12 Olympus Optical Co Ltd Operation input device
US6288704B1 (en) 1999-06-08 2001-09-11 Vega Vista, Inc. Motion detection and tracking system to control navigation and display of object viewers
US6466198B1 (en) * 1999-11-05 2002-10-15 Innoventions, Inc. View navigation and magnification of a hand-held device with a display
AU5657601A (en) * 2000-05-12 2001-11-20 Zvi Lapidot Apparatus and method for the kinematic control of hand-held devices
AUPQ896000A0 (en) * 2000-07-24 2000-08-17 Seeing Machines Pty Ltd Facial image processing system
US6798429B2 (en) * 2001-03-29 2004-09-28 Intel Corporation Intuitive mobile device interface to virtual spaces
WO2003001340A2 (en) * 2001-06-22 2003-01-03 Motion Sense Corporation Gesture recognition system and method
GB2378878B (en) * 2001-06-28 2005-10-05 Ubinetics Ltd A handheld display device

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9177220B2 (en) 2004-07-30 2015-11-03 Extreme Reality Ltd. System and method for 3D space-dimension based image processing
US8928654B2 (en) 2004-07-30 2015-01-06 Extreme Reality Ltd. Methods, systems, devices and associated processing logic for generating stereoscopic images and video
US8872899B2 (en) 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
US8878896B2 (en) 2005-10-31 2014-11-04 Extreme Reality Ltd. Apparatus method and system for imaging
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US9131220B2 (en) 2005-10-31 2015-09-08 Extreme Reality Ltd. Apparatus method and system for imaging
JP2007193656A (en) * 2006-01-20 2007-08-02 Kddi Corp Personal identification device
JP2010011431A (en) * 2008-05-27 2010-01-14 Toshiba Corp Wireless communication apparatus
JP2011526192A (en) * 2008-06-27 2011-10-06 マイクロソフト コーポレーション Dynamic selection of tilt function sensitivity
US10416775B2 (en) 2008-07-15 2019-09-17 Immersion Corporation Systems and methods for shifting haptic feedback function between passive and active modes
US9612662B2 (en) 2008-07-15 2017-04-04 Immersion Corporation Systems and methods for shifting haptic feedback function between passive and active modes
US10019061B2 (en) 2008-07-15 2018-07-10 Immersion Corporation Systems and methods for haptic message transmission
US9134803B2 (en) 2008-07-15 2015-09-15 Immersion Corporation Systems and methods for mapping message contents to virtual physical properties for vibrotactile messaging
US10198078B2 (en) 2008-07-15 2019-02-05 Immersion Corporation Systems and methods for mapping message contents to virtual physical properties for vibrotactile messaging
US9063571B2 (en) 2008-07-15 2015-06-23 Immersion Corporation Systems and methods for shifting haptic feedback function between passive and active modes
US8638301B2 (en) 2008-07-15 2014-01-28 Immersion Corporation Systems and methods for transmitting haptic messages
US8976112B2 (en) 2008-07-15 2015-03-10 Immersion Corporation Systems and methods for transmitting haptic messages
US8587417B2 (en) 2008-07-15 2013-11-19 Immersion Corporation Systems and methods for mapping message contents to virtual physical properties for vibrotactile messaging
US10203756B2 (en) 2008-07-15 2019-02-12 Immersion Corporation Systems and methods for shifting haptic feedback function between passive and active modes
US10248203B2 (en) 2008-07-15 2019-04-02 Immersion Corporation Systems and methods for physics-based tactile messaging
US8866602B2 (en) 2008-07-15 2014-10-21 Immersion Corporation Systems and methods for mapping message contents to virtual physical properties for vibrotactile messaging
WO2010010929A1 (en) * 2008-07-23 2010-01-28 Sega Corporation Game device, method for controlling game, game control program and computer readable recording medium storing program
JP2013175242A (en) * 2008-09-04 2013-09-05 Extreme Reality Ltd Israel Method system and software for providing image sensor based human machine interfacing
JP2012502344A (en) * 2008-09-04 2012-01-26 Extreme Reality Ltd. Method system and software for providing an image sensor based human machine interface
JP2010067142A (en) 2008-09-12 2010-03-25 British Virgin Islands CyWee Group Ltd Inertia sensing device
JP2012506100A (en) * 2008-10-15 2012-03-08 Invensense, Inc. Mobile device with gesture recognition
JP2012507802A (en) * 2008-10-29 2012-03-29 Invensense, Inc. Control and access content using motion processing on mobile devices
KR101568128B1 (en) * 2008-11-14 2015-11-12 Samsung Electronics Co., Ltd. Method for operating user interface based on motion sensor and mobile terminal using the same
JP2010118060A (en) * 2008-11-14 2010-05-27 Samsung Electronics Co Ltd Method for operating ui based on motion sensor and mobile terminal using the same
JP2012510109A (en) * 2008-11-24 2012-04-26 クアルコム,インコーポレイテッド Illustrated method for selecting and activating an application
US9679400B2 (en) 2008-11-24 2017-06-13 Qualcomm Incorporated Pictoral methods for application selection and activation
US9501694B2 (en) 2008-11-24 2016-11-22 Qualcomm Incorporated Pictorial methods for application selection and activation
JP2017174446A (en) * 2009-03-12 2017-09-28 Immersion Corporation Systems and methods for using textures in graphical user interface widgets
US10379618B2 (en) 2009-03-12 2019-08-13 Immersion Corporation Systems and methods for using textures in graphical user interface widgets
US10564721B2 (en) 2009-03-12 2020-02-18 Immersion Corporation Systems and methods for using multiple actuators to realize textures
JP2012530958A (en) * 2009-06-19 2012-12-06 Alcatel-Lucent Gesture on a touch sensitive input device to close a window or application
JP2011018161A (en) * 2009-07-08 2011-01-27 Nec Corp Portable terminal, and application operation method in portable terminal
US9030404B2 (en) 2009-07-23 2015-05-12 Qualcomm Incorporated Method and apparatus for distributed user interfaces using wearable devices to control mobile and consumer electronic devices
JP2013500523A (en) * 2009-07-23 2013-01-07 Qualcomm Incorporated Method and apparatus for controlling mobile devices and consumer electronic devices
US9000887B2 (en) 2009-07-23 2015-04-07 Qualcomm Incorporated Method and apparatus for communicating control information by a wearable device to control mobile and consumer electronic devices
US9024865B2 (en) 2009-07-23 2015-05-05 Qualcomm Incorporated Method and apparatus for controlling mobile and consumer electronic devices
US9218126B2 (en) 2009-09-21 2015-12-22 Extreme Reality Ltd. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
JP2013536660A (en) * 2010-08-31 2013-09-19 Wolfgang Brendel Wireless remote control by position sensor system
JP2012165073A (en) * 2011-02-03 2012-08-30 Sony Corp Controller, control method, and program
US8994516B2 (en) 2011-02-03 2015-03-31 Sony Corporation Control device, control method, and program
US9372549B2 (en) 2011-03-14 2016-06-21 Murata Electronics Oy Pointing method, a device and system for the same
JP2014508366A (en) * 2011-03-14 2014-04-03 Murata Electronics Oy Pointing method, device and system therefor
JP2012244264A (en) * 2011-05-17 2012-12-10 Funai Electric Co Ltd Image forming device
US9347968B2 (en) 2011-07-04 2016-05-24 Nikon Corporation Electronic device and input method
JP2013157959A (en) * 2012-01-31 2013-08-15 Toshiba Corp Portable terminal apparatus, voice recognition processing method for the same, and program
WO2013157630A1 (en) * 2012-04-20 2013-10-24 Nikon Corporation Electronic apparatus and motion detection method
JP2013101649A (en) * 2012-12-26 2013-05-23 Japan Research Institute Ltd Terminal device and computer program
JP2014168522A (en) * 2013-03-01 2014-09-18 Toshiba Corp X-ray diagnostic apparatus
JP2015005197A (en) * 2013-06-21 2015-01-08 カシオ計算機株式会社 Information processing apparatus, information processing method, and program
JP2018523200A (en) * 2015-06-26 2018-08-16 Intel Corporation Technology for input gesture control of wearable computing devices based on fine motion
WO2017094346A1 (en) * 2015-12-01 2017-06-08 Sony Corporation Information processing device, information processing method, and program

Also Published As

Publication number Publication date
WO2005103863A2 (en) 2005-11-03
JP2008299866A (en) 2008-12-11
JP4812812B2 (en) 2011-11-09
KR100853605B1 (en) 2008-08-22
EP1728142B1 (en) 2010-08-04
KR20060134119A (en) 2006-12-27
WO2005103863A3 (en) 2006-01-26
EP1728142A2 (en) 2006-12-06

Similar Documents

Publication Publication Date Title
AU2016331484B2 (en) Intelligent device identification
AU2018204174B2 (en) Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs
US10209877B2 (en) Touch screen device, method, and graphical user interface for moving on-screen objects without using a cursor
EP3040684B1 (en) Mobile terminal and control method for the mobile terminal
KR101929372B1 (en) Transition from use of one device to another
US10420064B2 (en) Tactile feedback in an electronic device
US9747072B2 (en) Context-aware notifications
US9304583B2 (en) Movement recognition as input mechanism
US9986391B2 (en) Automated generation of recommended response messages
WO2016119696A1 (en) Action based identity identification system and method
CN104487927B (en) Device, method and graphical user interface for selecting user interface objects
EP3041201B1 (en) User terminal device and control method thereof
JP6275706B2 (en) Text recognition driven functionality
EP2801899B1 (en) Method, device and system for providing a private page
JP2020077403A (en) User interface for loyalty and private label accounts for wearable devices
CN102955653B (en) Device, method and graphical user interface for navigating and previewing content items
CN105264477B (en) Device, method and graphical user interface for moving user interface objects
KR101934822B1 (en) Unlocking method of mobile terminal and the mobile terminal
US9104293B1 (en) User interface points of interest approaches for mapping applications
JP5951781B2 (en) Multidimensional interface
JP2017513126A (en) Apparatus and method for a ring computing device
JP6185656B2 (en) Mobile device interface
US8368723B1 (en) User input combination of touch and user position
NL2008029C2 (en) Device, method, and graphical user interface for switching between two user interfaces.
US20170357973A1 (en) User interfaces for transactions

Legal Events

Date Code Title Description
2008-05-27 A131 Notification of reasons for refusal (Free format text: JAPANESE INTERMEDIATE CODE: A131)
2008-07-25 A521 Written amendment (Free format text: JAPANESE INTERMEDIATE CODE: A523)
2008-12-16 A02 Decision of refusal (Free format text: JAPANESE INTERMEDIATE CODE: A02)